Oct 5 02:42:02 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Oct 5 02:42:02 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 5 02:42:02 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 5 02:42:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 5 02:42:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 5 02:42:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 5 02:42:02 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 5 02:42:02 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 5 02:42:02 localhost kernel: signal: max sigframe size: 1776
Oct 5 02:42:02 localhost kernel: BIOS-provided physical RAM map:
Oct 5 02:42:02 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 5 02:42:02 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 5 02:42:02 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 5 02:42:02 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 5 02:42:02 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 5 02:42:02 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 5 02:42:02 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 5 02:42:02 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Oct 5 02:42:02 localhost kernel: NX (Execute Disable) protection: active
Oct 5 02:42:02 localhost kernel: SMBIOS 2.8 present.
Oct 5 02:42:02 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 5 02:42:02 localhost kernel: Hypervisor detected: KVM
Oct 5 02:42:02 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 5 02:42:02 localhost kernel: kvm-clock: using sched offset of 3821635354 cycles
Oct 5 02:42:02 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 5 02:42:02 localhost kernel: tsc: Detected 2799.998 MHz processor
Oct 5 02:42:02 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Oct 5 02:42:02 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 5 02:42:02 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 5 02:42:02 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 5 02:42:02 localhost kernel: Using GB pages for direct mapping
Oct 5 02:42:02 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Oct 5 02:42:02 localhost kernel: ACPI: Early table checksum verification disabled
Oct 5 02:42:02 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 5 02:42:02 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:02 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:02 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:02 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 5 02:42:02 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:02 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:02 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 5 02:42:02 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 5 02:42:02 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 5 02:42:02 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 5 02:42:02 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 5 02:42:02 localhost kernel: No NUMA configuration found
Oct 5 02:42:02 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Oct 5 02:42:02 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Oct 5 02:42:02 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Oct 5 02:42:02 localhost kernel: Zone ranges:
Oct 5 02:42:02 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 5 02:42:02 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Oct 5 02:42:02 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Oct 5 02:42:02 localhost kernel: Device empty
Oct 5 02:42:02 localhost kernel: Movable zone start for each node
Oct 5 02:42:02 localhost kernel: Early memory node ranges
Oct 5 02:42:02 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 5 02:42:02 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 5 02:42:02 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Oct 5 02:42:02 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Oct 5 02:42:02 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 5 02:42:02 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 5 02:42:02 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 5 02:42:02 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 5 02:42:02 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 5 02:42:02 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 5 02:42:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 5 02:42:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 5 02:42:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 5 02:42:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 5 02:42:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 5 02:42:02 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 5 02:42:02 localhost kernel: TSC deadline timer available
Oct 5 02:42:02 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Oct 5 02:42:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 5 02:42:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 5 02:42:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 5 02:42:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 5 02:42:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 5 02:42:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 5 02:42:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 5 02:42:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 5 02:42:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 5 02:42:02 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 5 02:42:02 localhost kernel: Booting paravirtualized kernel on KVM
Oct 5 02:42:02 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 5 02:42:02 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 5 02:42:02 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Oct 5 02:42:02 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 5 02:42:02 localhost kernel: Fallback order for Node 0: 0
Oct 5 02:42:02 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Oct 5 02:42:02 localhost kernel: Policy zone: Normal
Oct 5 02:42:02 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 5 02:42:02 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Oct 5 02:42:02 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Oct 5 02:42:02 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 5 02:42:02 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 5 02:42:02 localhost kernel: software IO TLB: area num 8.
Oct 5 02:42:02 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Oct 5 02:42:02 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Oct 5 02:42:02 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 5 02:42:02 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Oct 5 02:42:02 localhost kernel: ftrace: allocated 176 pages with 3 groups
Oct 5 02:42:02 localhost kernel: Dynamic Preempt: voluntary
Oct 5 02:42:02 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 5 02:42:02 localhost kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 5 02:42:02 localhost kernel: #011Trampoline variant of Tasks RCU enabled.
Oct 5 02:42:02 localhost kernel: #011Rude variant of Tasks RCU enabled.
Oct 5 02:42:02 localhost kernel: #011Tracing variant of Tasks RCU enabled.
Oct 5 02:42:02 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 5 02:42:02 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 5 02:42:02 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 5 02:42:02 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 5 02:42:02 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 5 02:42:02 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Oct 5 02:42:02 localhost kernel: Console: colour VGA+ 80x25
Oct 5 02:42:02 localhost kernel: printk: console [tty0] enabled
Oct 5 02:42:02 localhost kernel: printk: console [ttyS0] enabled
Oct 5 02:42:02 localhost kernel: ACPI: Core revision 20211217
Oct 5 02:42:02 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 5 02:42:02 localhost kernel: x2apic enabled
Oct 5 02:42:02 localhost kernel: Switched APIC routing to physical x2apic.
Oct 5 02:42:02 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 5 02:42:02 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 5 02:42:02 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 5 02:42:02 localhost kernel: LSM: Security Framework initializing
Oct 5 02:42:02 localhost kernel: Yama: becoming mindful.
Oct 5 02:42:02 localhost kernel: SELinux: Initializing.
Oct 5 02:42:02 localhost kernel: LSM support for eBPF active
Oct 5 02:42:02 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 5 02:42:02 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 5 02:42:02 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 5 02:42:02 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 5 02:42:02 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 5 02:42:02 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 5 02:42:02 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 5 02:42:02 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 5 02:42:02 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 5 02:42:02 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 5 02:42:02 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 5 02:42:02 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 5 02:42:02 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 5 02:42:02 localhost kernel: Freeing SMP alternatives memory: 36K
Oct 5 02:42:02 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 5 02:42:02 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Oct 5 02:42:02 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 5 02:42:02 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 5 02:42:02 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 5 02:42:02 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 5 02:42:02 localhost kernel: ... version: 0
Oct 5 02:42:02 localhost kernel: ... bit width: 48
Oct 5 02:42:02 localhost kernel: ... generic registers: 6
Oct 5 02:42:02 localhost kernel: ... value mask: 0000ffffffffffff
Oct 5 02:42:02 localhost kernel: ... max period: 00007fffffffffff
Oct 5 02:42:02 localhost kernel: ... fixed-purpose events: 0
Oct 5 02:42:02 localhost kernel: ... event mask: 000000000000003f
Oct 5 02:42:02 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 5 02:42:02 localhost kernel: rcu: #011Max phase no-delay instances is 400.
Oct 5 02:42:02 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 5 02:42:02 localhost kernel: x86: Booting SMP configuration:
Oct 5 02:42:02 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Oct 5 02:42:02 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 5 02:42:02 localhost kernel: smpboot: Max logical packages: 8
Oct 5 02:42:02 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 5 02:42:02 localhost kernel: node 0 deferred pages initialised in 24ms
Oct 5 02:42:02 localhost kernel: devtmpfs: initialized
Oct 5 02:42:02 localhost kernel: x86/mm: Memory block size: 128MB
Oct 5 02:42:02 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 5 02:42:02 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 5 02:42:02 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 5 02:42:02 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 5 02:42:02 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Oct 5 02:42:02 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 5 02:42:02 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 5 02:42:02 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 5 02:42:02 localhost kernel: audit: type=2000 audit(1759646521.466:1): state=initialized audit_enabled=0 res=1
Oct 5 02:42:02 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 5 02:42:02 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 5 02:42:02 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 5 02:42:02 localhost kernel: cpuidle: using governor menu
Oct 5 02:42:02 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Oct 5 02:42:02 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 5 02:42:02 localhost kernel: PCI: Using configuration type 1 for base access
Oct 5 02:42:02 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 5 02:42:02 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 5 02:42:02 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Oct 5 02:42:02 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 5 02:42:02 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 5 02:42:02 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 5 02:42:02 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 5 02:42:02 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 5 02:42:02 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 5 02:42:02 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 5 02:42:02 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 5 02:42:02 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 5 02:42:02 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 5 02:42:02 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 5 02:42:02 localhost kernel: ACPI: Interpreter enabled
Oct 5 02:42:02 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 5 02:42:02 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 5 02:42:02 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 5 02:42:02 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 5 02:42:02 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 5 02:42:02 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 5 02:42:02 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [3] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [4] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [5] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [6] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [7] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [8] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [9] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [10] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [11] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [12] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [13] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [14] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [15] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [16] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [17] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [18] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [19] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [20] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [21] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [22] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [23] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [24] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [25] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [26] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [27] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [28] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [29] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [30] registered
Oct 5 02:42:02 localhost kernel: acpiphp: Slot [31] registered
Oct 5 02:42:02 localhost kernel: PCI host bridge to bus 0000:00
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 5 02:42:02 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 5 02:42:02 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 5 02:42:02 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 5 02:42:02 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 5 02:42:02 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Oct 5 02:42:02 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Oct 5 02:42:02 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 5 02:42:02 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 5 02:42:02 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Oct 5 02:42:02 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Oct 5 02:42:02 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 5 02:42:02 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Oct 5 02:42:02 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Oct 5 02:42:02 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Oct 5 02:42:02 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Oct 5 02:42:02 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 5 02:42:02 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Oct 5 02:42:02 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Oct 5 02:42:02 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 5 02:42:02 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Oct 5 02:42:02 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Oct 5 02:42:02 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 5 02:42:02 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 5 02:42:02 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 5 02:42:02 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 5 02:42:02 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 5 02:42:02 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 5 02:42:02 localhost kernel: iommu: Default domain type: Translated
Oct 5 02:42:02 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 5 02:42:02 localhost kernel: SCSI subsystem initialized
Oct 5 02:42:02 localhost kernel: ACPI: bus type USB registered
Oct 5 02:42:02 localhost kernel: usbcore: registered new interface driver usbfs
Oct 5 02:42:02 localhost kernel: usbcore: registered new interface driver hub
Oct 5 02:42:02 localhost kernel: usbcore: registered new device driver usb
Oct 5 02:42:02 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 5 02:42:02 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 5 02:42:02 localhost kernel: PTP clock support registered
Oct 5 02:42:02 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 5 02:42:02 localhost kernel: NetLabel: Initializing
Oct 5 02:42:02 localhost kernel: NetLabel: domain hash size = 128
Oct 5 02:42:02 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Oct 5 02:42:02 localhost kernel: NetLabel: unlabeled traffic allowed by default
Oct 5 02:42:02 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 5 02:42:02 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 5 02:42:02 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 5 02:42:02 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 5 02:42:02 localhost kernel: vgaarb: loaded
Oct 5 02:42:02 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 5 02:42:02 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 5 02:42:02 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 5 02:42:02 localhost kernel: pnp: PnP ACPI init
Oct 5 02:42:02 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 5 02:42:02 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 5 02:42:02 localhost kernel: NET: Registered PF_INET protocol family
Oct 5 02:42:02 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 5 02:42:02 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Oct 5 02:42:02 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 5 02:42:02 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 5 02:42:02 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 5 02:42:02 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Oct 5 02:42:02 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Oct 5 02:42:02 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Oct 5 02:42:02 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Oct 5 02:42:02 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 5 02:42:02 localhost kernel: NET: Registered PF_XDP protocol family
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 5 02:42:02 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 5 02:42:02 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 5 02:42:02 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 5 02:42:02 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 31117 usecs
Oct 5 02:42:02 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 5 02:42:02 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 5 02:42:02 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 5 02:42:02 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 5 02:42:02 localhost kernel: ACPI: bus type thunderbolt registered
Oct 5 02:42:02 localhost kernel: Initialise system trusted keyrings
Oct 5 02:42:02 localhost kernel: Key type blacklist registered
Oct 5 02:42:02 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Oct 5 02:42:02 localhost kernel: zbud: loaded
Oct 5 02:42:02 localhost kernel: integrity: Platform Keyring initialized
Oct 5 02:42:02 localhost kernel: NET: Registered PF_ALG protocol family
Oct 5 02:42:02 localhost kernel: xor: automatically using best checksumming function avx
Oct 5 02:42:02 localhost kernel: Key type asymmetric registered
Oct 5 02:42:02 localhost kernel: Asymmetric key parser 'x509' registered
Oct 5 02:42:02 localhost kernel: Running certificate verification selftests
Oct 5 02:42:02 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 5 02:42:02 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 5 02:42:02 localhost kernel: io scheduler mq-deadline registered
Oct 5 02:42:02 localhost kernel: io scheduler kyber registered
Oct 5 02:42:02 localhost kernel: io scheduler bfq registered
Oct 5 02:42:02 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 5 02:42:02 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 5 02:42:02 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 5 02:42:02 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 5 02:42:02 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 5 02:42:02 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 5 02:42:02 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 5 02:42:02 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 5 02:42:02 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 5 02:42:02 localhost kernel: Non-volatile memory driver v1.3
Oct 5 02:42:02 localhost kernel: rdac: device handler registered
Oct 5 02:42:02 localhost kernel: hp_sw: device handler registered
Oct 5 02:42:02 localhost kernel: emc: device handler registered
Oct 5 02:42:02 localhost kernel: alua: device handler registered
Oct 5 02:42:02 localhost kernel: libphy: Fixed MDIO Bus: probed
Oct 5 02:42:02 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Oct 5 02:42:02 localhost kernel: ehci-pci: EHCI PCI platform driver
Oct 5 02:42:02 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Oct 5 02:42:02 localhost kernel: ohci-pci: OHCI PCI platform driver
Oct 5 02:42:02 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Oct 5 02:42:02 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 5 02:42:02 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 5 02:42:02 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 5 02:42:02 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 5 02:42:02 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 5 02:42:02 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 5 02:42:02 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 5 02:42:02 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Oct 5 02:42:02 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 5 02:42:02 localhost kernel: hub 1-0:1.0: USB hub found
Oct 5 02:42:02 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 5 02:42:02 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 5 02:42:02 localhost kernel: usbserial: USB Serial support registered for generic
Oct 5 02:42:02 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 5 02:42:02 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 5 02:42:02 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 5 02:42:02 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 5 02:42:02 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 5 02:42:02 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 5 02:42:02 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 5 02:42:02 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-05T06:42:01 UTC (1759646521)
Oct 5 02:42:02 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 5 02:42:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 5 02:42:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 5 02:42:02 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 5 02:42:02 localhost kernel: usbcore: registered new interface driver usbhid
Oct 5 02:42:02 localhost kernel: usbhid: USB HID core driver
Oct 5 02:42:02 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 5 02:42:02 localhost kernel: Initializing XFRM netlink socket
Oct 5 02:42:02 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 5 02:42:02 localhost kernel: Segment Routing with IPv6
Oct 5 02:42:02 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 5 02:42:02 localhost kernel: mpls_gso: MPLS GSO support
Oct 5 02:42:02 localhost kernel: IPI shorthand broadcast: enabled
Oct 5 02:42:02 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 5 02:42:02 localhost kernel: AES CTR mode by8 optimization enabled
Oct 5 02:42:02 localhost kernel: sched_clock: Marking stable (762474740, 183308805)->(1072035433, -126251888)
Oct 5 02:42:02 localhost kernel: registered taskstats version 1
Oct 5 02:42:02 localhost kernel: Loading compiled-in X.509 certificates
Oct 5 02:42:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Oct 5 02:42:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 5 02:42:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 5 02:42:02 localhost kernel: zswap: loaded using pool lzo/zbud
Oct 5 02:42:02 localhost kernel: page_owner is disabled
Oct 5 02:42:02 localhost kernel: Key type big_key registered
Oct 5 02:42:02 localhost kernel: Freeing initrd memory: 74232K
Oct 5 02:42:02 localhost kernel: Key type encrypted registered
Oct 5 02:42:02 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 5 02:42:02 localhost kernel: Loading compiled-in module X.509 certificates
Oct 5 02:42:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Oct 5 02:42:02 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 5 02:42:02 localhost kernel: ima: No architecture policies found
Oct 5 02:42:02 localhost kernel: evm: Initialising EVM extended attributes:
Oct 5 02:42:02 localhost kernel: evm: security.selinux
Oct 5 02:42:02 localhost kernel: evm: security.SMACK64 (disabled)
Oct 5 02:42:02 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 5 02:42:02 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 5 02:42:02 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 5 02:42:02 localhost kernel: evm: security.apparmor (disabled)
Oct 5 02:42:02 localhost kernel: evm: security.ima
Oct 5 02:42:02 localhost kernel: evm: security.capability
Oct 5 02:42:02 localhost kernel: evm: HMAC attrs: 0x1
Oct 5 02:42:02 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 5 02:42:02 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 5 02:42:02 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 5 02:42:02 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 5 02:42:02 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 5 02:42:02 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 5 02:42:02 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 5 02:42:02 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 5 02:42:02 localhost kernel: Freeing unused decrypted memory: 2036K
Oct 5 02:42:02 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Oct 5 02:42:02 localhost kernel: Write protecting the kernel read-only data: 26624k
Oct 5 02:42:02 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Oct 5 02:42:02 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Oct 5 02:42:02 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 5 02:42:02 localhost kernel: Run /init as init process
Oct 5 02:42:02 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 5 02:42:02 localhost systemd[1]: Detected virtualization kvm.
Oct 5 02:42:02 localhost systemd[1]: Detected architecture x86-64.
Oct 5 02:42:02 localhost systemd[1]: Running in initrd.
Oct 5 02:42:02 localhost systemd[1]: No hostname configured, using default hostname.
Oct 5 02:42:02 localhost systemd[1]: Hostname set to .
Oct 5 02:42:02 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 5 02:42:02 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 5 02:42:02 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 5 02:42:02 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 5 02:42:02 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 5 02:42:02 localhost systemd[1]: Reached target Local File Systems.
Oct 5 02:42:02 localhost systemd[1]: Reached target Path Units.
Oct 5 02:42:02 localhost systemd[1]: Reached target Slice Units.
Oct 5 02:42:02 localhost systemd[1]: Reached target Swaps.
Oct 5 02:42:02 localhost systemd[1]: Reached target Timer Units.
Oct 5 02:42:02 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 5 02:42:02 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 5 02:42:02 localhost systemd[1]: Listening on Journal Socket.
Oct 5 02:42:02 localhost systemd[1]: Listening on udev Control Socket.
Oct 5 02:42:02 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 5 02:42:02 localhost systemd[1]: Reached target Socket Units.
Oct 5 02:42:02 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 5 02:42:02 localhost systemd[1]: Starting Journal Service...
Oct 5 02:42:02 localhost systemd[1]: Starting Load Kernel Modules...
Oct 5 02:42:02 localhost systemd[1]: Starting Create System Users...
Oct 5 02:42:02 localhost systemd[1]: Starting Setup Virtual Console...
Oct 5 02:42:02 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 5 02:42:02 localhost systemd-journald[284]: Journal started
Oct 5 02:42:02 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/8a2ee9a27fe74677a151037462d3ba7a) is 8.0M, max 314.7M, 306.7M free.
Oct 5 02:42:02 localhost systemd-modules-load[285]: Module 'msr' is built in
Oct 5 02:42:02 localhost systemd[1]: Started Journal Service.
Oct 5 02:42:02 localhost systemd[1]: Finished Load Kernel Modules.
Oct 5 02:42:02 localhost systemd[1]: Finished Setup Virtual Console.
Oct 5 02:42:02 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 5 02:42:02 localhost systemd[1]: Starting dracut cmdline hook...
Oct 5 02:42:02 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 5 02:42:02 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Oct 5 02:42:02 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Oct 5 02:42:02 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Oct 5 02:42:02 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 5 02:42:02 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 5 02:42:02 localhost systemd[1]: Finished Create System Users.
Oct 5 02:42:02 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Oct 5 02:42:02 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 5 02:42:02 localhost dracut-cmdline[289]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 5 02:42:02 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 5 02:42:02 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 5 02:42:02 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 5 02:42:02 localhost systemd[1]: Finished dracut cmdline hook.
Oct 5 02:42:02 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 5 02:42:02 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 5 02:42:02 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 5 02:42:02 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Oct 5 02:42:02 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 5 02:42:02 localhost kernel: RPC: Registered udp transport module.
Oct 5 02:42:02 localhost kernel: RPC: Registered tcp transport module.
Oct 5 02:42:02 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 5 02:42:02 localhost rpc.statd[407]: Version 2.5.4 starting
Oct 5 02:42:02 localhost rpc.statd[407]: Initializing NSM state
Oct 5 02:42:02 localhost rpc.idmapd[412]: Setting log level to 0
Oct 5 02:42:02 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 5 02:42:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
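The crashkernel= parameter echoed by dracut-cmdline above uses the kernel's range syntax: each comma-separated entry is `start-[end]:size`, and the reservation whose range contains the installed RAM wins (an empty end means open-ended). A rough Python sketch of that selection logic (an illustration, not the kernel's implementation; the e820 map earlier in this log suggests roughly 16 GiB of RAM, which falls in the 4G-64G range):

```python
def parse_size(s):
    """Parse a kernel-style size suffix (K/M/G/T); empty string means 'no bound'."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}
    if not s:
        return None
    if s[-1] in units:
        return int(s[:-1]) * units[s[-1]]
    return int(s)

def crashkernel_reservation(spec, ram_bytes):
    """Return the reservation size for a given RAM amount, or 0 if no range matches."""
    for entry in spec.split(","):
        rng, size = entry.split(":")
        lo, hi = rng.split("-")
        lo_b, hi_b = parse_size(lo), parse_size(hi)
        # range start is inclusive; an empty end ("64G-") is open-ended
        if ram_bytes >= lo_b and (hi_b is None or ram_bytes < hi_b):
            return parse_size(size)
    return 0

spec = "1G-4G:192M,4G-64G:256M,64G-:512M"
print(crashkernel_reservation(spec, 16 * 2**30) // 2**20)  # 256 (MiB) for a 16 GiB guest
```

Systems with less than 1 GiB of RAM match no range and reserve nothing, which matches the spec's intent of scaling the kdump reservation with memory size.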
Oct 5 02:42:02 localhost systemd-udevd[425]: Using default interface naming scheme 'rhel-9.0'.
Oct 5 02:42:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 5 02:42:02 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 5 02:42:02 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 5 02:42:02 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 5 02:42:02 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 5 02:42:02 localhost systemd[1]: Reached target System Initialization.
Oct 5 02:42:02 localhost systemd[1]: Reached target Basic System.
Oct 5 02:42:02 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 5 02:42:02 localhost systemd[1]: Reached target Network.
Oct 5 02:42:02 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 5 02:42:02 localhost systemd[1]: Starting dracut initqueue hook...
Oct 5 02:42:02 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Oct 5 02:42:02 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 5 02:42:02 localhost kernel: GPT:20971519 != 838860799
Oct 5 02:42:02 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 5 02:42:02 localhost kernel: GPT:20971519 != 838860799
Oct 5 02:42:02 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
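The GPT warnings above mean the backup (alternate) GPT header still sits at sector 20971519, while the disk now ends at sector 838860799: the image was grown without relocating the backup header to the new last LBA. The two sector numbers in the log decode as follows (simple arithmetic, shown for clarity):

```python
SECTOR = 512  # vda uses 512-byte logical blocks, per the virtio_blk line above

alt_header_lba = 20971519   # where the primary header says the backup header lives
disk_sectors = 838860800    # actual size of the grown disk

# Size the GPT originally described vs. the disk's current size
original_gib = (alt_header_lba + 1) * SECTOR / 2**30
current_gib = disk_sectors * SECTOR / 2**30

print(original_gib, current_gib)  # 10.0 400.0 -> a 10 GiB image grown to 400 GiB
```

The kernel's own suggestion is to repair this with GNU Parted; `sgdisk -e` on the device is another commonly used fix that moves the backup header to the true end of the disk (run either from a maintenance environment, and back up the partition table first).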
Oct 5 02:42:02 localhost kernel: vda: vda1 vda2 vda3 vda4
Oct 5 02:42:02 localhost kernel: scsi host0: ata_piix
Oct 5 02:42:02 localhost kernel: scsi host1: ata_piix
Oct 5 02:42:02 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Oct 5 02:42:02 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Oct 5 02:42:02 localhost systemd-udevd[427]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 02:42:02 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Oct 5 02:42:02 localhost systemd[1]: Reached target Initrd Root Device.
Oct 5 02:42:03 localhost kernel: ata1: found unknown device (class 0)
Oct 5 02:42:03 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 5 02:42:03 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 5 02:42:03 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 5 02:42:03 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 5 02:42:03 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 5 02:42:03 localhost systemd[1]: Finished dracut initqueue hook.
Oct 5 02:42:03 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 5 02:42:03 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 5 02:42:03 localhost systemd[1]: Reached target Remote File Systems.
Oct 5 02:42:03 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 5 02:42:03 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 5 02:42:03 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Oct 5 02:42:03 localhost systemd-fsck[513]: /usr/sbin/fsck.xfs: XFS file system.
Oct 5 02:42:03 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Oct 5 02:42:03 localhost systemd[1]: Mounting /sysroot...
Oct 5 02:42:03 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 5 02:42:03 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Oct 5 02:42:03 localhost kernel: XFS (vda4): Ending clean mount
Oct 5 02:42:03 localhost systemd[1]: Mounted /sysroot.
Oct 5 02:42:03 localhost systemd[1]: Reached target Initrd Root File System.
Oct 5 02:42:03 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 5 02:42:03 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 5 02:42:03 localhost systemd[1]: Reached target Initrd File Systems.
Oct 5 02:42:03 localhost systemd[1]: Reached target Initrd Default Target.
Oct 5 02:42:03 localhost systemd[1]: Starting dracut mount hook...
Oct 5 02:42:03 localhost systemd[1]: Finished dracut mount hook.
Oct 5 02:42:03 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 5 02:42:03 localhost rpc.idmapd[412]: exiting on signal 15
Oct 5 02:42:03 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 5 02:42:03 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 5 02:42:03 localhost systemd[1]: Stopped target Network.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Timer Units.
Oct 5 02:42:03 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 5 02:42:03 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Basic System.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Path Units.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Remote File Systems.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Slice Units.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Socket Units.
Oct 5 02:42:03 localhost systemd[1]: Stopped target System Initialization.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Local File Systems.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Swaps.
Oct 5 02:42:03 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped dracut mount hook.
Oct 5 02:42:03 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 5 02:42:03 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 5 02:42:03 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 5 02:42:03 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 5 02:42:03 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 5 02:42:03 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Load Kernel Modules.
Oct 5 02:42:03 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 5 02:42:03 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 5 02:42:03 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 5 02:42:03 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 5 02:42:03 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 5 02:42:03 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 5 02:42:03 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 5 02:42:03 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Closed udev Control Socket.
Oct 5 02:42:03 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Closed udev Kernel Socket.
Oct 5 02:42:03 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 5 02:42:03 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 5 02:42:03 localhost systemd[1]: Starting Cleanup udev Database...
Oct 5 02:42:03 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 5 02:42:03 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 5 02:42:03 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Stopped Create System Users.
Oct 5 02:42:03 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 5 02:42:03 localhost systemd[1]: Finished Cleanup udev Database.
Oct 5 02:42:03 localhost systemd[1]: Reached target Switch Root.
Oct 5 02:42:03 localhost systemd[1]: Starting Switch Root...
Oct 5 02:42:03 localhost systemd[1]: Switching root.
Oct 5 02:42:03 localhost systemd-journald[284]: Journal stopped
Oct 5 02:42:04 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Oct 5 02:42:04 localhost kernel: audit: type=1404 audit(1759646524.067:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 5 02:42:04 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 02:42:04 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 02:42:04 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 02:42:04 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 02:42:04 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 02:42:04 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 02:42:04 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 02:42:04 localhost kernel: audit: type=1403 audit(1759646524.210:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 5 02:42:04 localhost systemd[1]: Successfully loaded SELinux policy in 146.198ms.
Oct 5 02:42:04 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.355ms.
Oct 5 02:42:04 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 5 02:42:04 localhost systemd[1]: Detected virtualization kvm.
Oct 5 02:42:04 localhost systemd[1]: Detected architecture x86-64.
Oct 5 02:42:04 localhost systemd-rc-local-generator[583]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 02:42:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 02:42:04 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 5 02:42:04 localhost systemd[1]: Stopped Switch Root.
Oct 5 02:42:04 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 5 02:42:04 localhost systemd[1]: Created slice Slice /system/getty.
Oct 5 02:42:04 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 5 02:42:04 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 5 02:42:04 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 5 02:42:04 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Oct 5 02:42:04 localhost systemd[1]: Created slice User and Session Slice.
Oct 5 02:42:04 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 5 02:42:04 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 5 02:42:04 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 5 02:42:04 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 5 02:42:04 localhost systemd[1]: Stopped target Switch Root.
Oct 5 02:42:04 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 5 02:42:04 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 5 02:42:04 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 5 02:42:04 localhost systemd[1]: Reached target Path Units.
Oct 5 02:42:04 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 5 02:42:04 localhost systemd[1]: Reached target Slice Units.
Oct 5 02:42:04 localhost systemd[1]: Reached target Swaps.
Oct 5 02:42:04 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 5 02:42:04 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 5 02:42:04 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 5 02:42:04 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 5 02:42:04 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 5 02:42:04 localhost systemd[1]: Listening on udev Control Socket.
Oct 5 02:42:04 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 5 02:42:04 localhost systemd[1]: Mounting Huge Pages File System...
Oct 5 02:42:04 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 5 02:42:04 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 5 02:42:04 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 5 02:42:04 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 5 02:42:04 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 5 02:42:04 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 5 02:42:04 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 5 02:42:04 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 5 02:42:04 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 5 02:42:04 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 5 02:42:04 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 5 02:42:04 localhost systemd[1]: Stopped Journal Service.
Oct 5 02:42:04 localhost systemd[1]: Starting Journal Service...
Oct 5 02:42:04 localhost systemd[1]: Starting Load Kernel Modules...
Oct 5 02:42:04 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 5 02:42:04 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 5 02:42:04 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 5 02:42:04 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 5 02:42:04 localhost kernel: fuse: init (API version 7.36)
Oct 5 02:42:04 localhost systemd[1]: Mounted Huge Pages File System.
Oct 5 02:42:04 localhost systemd-journald[619]: Journal started
Oct 5 02:42:04 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/19f34a97e4e878e70ef0e6e08186acc9) is 8.0M, max 314.7M, 306.7M free.
Oct 5 02:42:04 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 5 02:42:04 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 5 02:42:04 localhost systemd-modules-load[620]: Module 'msr' is built in
Oct 5 02:42:04 localhost systemd[1]: Started Journal Service.
Oct 5 02:42:04 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 5 02:42:04 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 5 02:42:04 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 5 02:42:04 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 5 02:42:04 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 5 02:42:04 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 5 02:42:04 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 5 02:42:04 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
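The XFS remount message above flags the classic 32-bit time_t horizon: with pre-bigtime on-disk timestamps, the filesystem can represent times only up to 0x7fffffff seconds after the Unix epoch. That cutoff decodes as follows (a quick arithmetic check, not part of the log):

```python
from datetime import datetime, timezone

limit = 0x7fffffff  # 2**31 - 1 seconds since the Unix epoch, per the kernel message
cutoff = datetime.fromtimestamp(limit, tz=timezone.utc)
print(cutoff.isoformat())  # 2038-01-19T03:14:07+00:00
```

XFS filesystems created (or upgraded) with the bigtime feature extend this limit to the year 2486; this log simply records that vda4 still uses the older timestamp range.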
Oct 5 02:42:04 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 5 02:42:04 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 5 02:42:04 localhost systemd[1]: Finished Load Kernel Modules.
Oct 5 02:42:04 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 5 02:42:04 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 5 02:42:05 localhost systemd[1]: Mounting FUSE Control File System...
Oct 5 02:42:05 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 5 02:42:05 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 5 02:42:05 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 5 02:42:05 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 5 02:42:05 localhost kernel: ACPI: bus type drm_connector registered
Oct 5 02:42:05 localhost systemd[1]: Starting Load/Save Random Seed...
Oct 5 02:42:05 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 5 02:42:05 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/19f34a97e4e878e70ef0e6e08186acc9) is 8.0M, max 314.7M, 306.7M free.
Oct 5 02:42:05 localhost systemd-journald[619]: Received client request to flush runtime journal.
Oct 5 02:42:05 localhost systemd[1]: Starting Create System Users...
Oct 5 02:42:05 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 5 02:42:05 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 5 02:42:05 localhost systemd[1]: Mounted FUSE Control File System.
Oct 5 02:42:05 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 5 02:42:05 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 5 02:42:05 localhost systemd[1]: Finished Load/Save Random Seed.
Oct 5 02:42:05 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 5 02:42:05 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 5 02:42:05 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 5 02:42:05 localhost systemd-sysusers[631]: Creating group 'sgx' with GID 989.
Oct 5 02:42:05 localhost systemd-sysusers[631]: Creating group 'systemd-oom' with GID 988.
Oct 5 02:42:05 localhost systemd-sysusers[631]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Oct 5 02:42:05 localhost systemd[1]: Finished Create System Users.
Oct 5 02:42:05 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 5 02:42:05 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 5 02:42:05 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 5 02:42:05 localhost systemd[1]: Set up automount EFI System Partition Automount.
Oct 5 02:42:05 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 5 02:42:05 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 5 02:42:05 localhost systemd-udevd[636]: Using default interface naming scheme 'rhel-9.0'.
Oct 5 02:42:05 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 5 02:42:05 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 5 02:42:05 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 5 02:42:05 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 5 02:42:05 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 5 02:42:05 localhost systemd-udevd[649]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 02:42:05 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Oct 5 02:42:05 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Oct 5 02:42:05 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 5 02:42:05 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 5 02:42:05 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Oct 5 02:42:05 localhost systemd-fsck[682]: fsck.fat 4.2 (2021-01-31)
Oct 5 02:42:05 localhost systemd-fsck[682]: /dev/vda2: 12 files, 1782/51145 clusters
Oct 5 02:42:05 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Oct 5 02:42:05 localhost kernel: SVM: TSC scaling supported
Oct 5 02:42:05 localhost kernel: kvm: Nested Virtualization enabled
Oct 5 02:42:05 localhost kernel: SVM: kvm: Nested Paging enabled
Oct 5 02:42:05 localhost kernel: SVM: LBR virtualization supported
Oct 5 02:42:05 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 5 02:42:05 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 5 02:42:05 localhost kernel: Console: switching to colour dummy device 80x25
Oct 5 02:42:05 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 5 02:42:05 localhost kernel: [drm] features: -context_init
Oct 5 02:42:05 localhost kernel: [drm] number of scanouts: 1
Oct 5 02:42:05 localhost kernel: [drm] number of cap sets: 0
Oct 5 02:42:05 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Oct 5 02:42:05 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Oct 5 02:42:05 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 5 02:42:05 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 5 02:42:05 localhost systemd[1]: Mounting /boot...
Oct 5 02:42:05 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Oct 5 02:42:06 localhost kernel: XFS (vda3): Ending clean mount
Oct 5 02:42:06 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Oct 5 02:42:06 localhost systemd[1]: Mounted /boot.
Oct 5 02:42:06 localhost systemd[1]: Mounting /boot/efi...
Oct 5 02:42:06 localhost systemd[1]: Mounted /boot/efi.
Oct 5 02:42:06 localhost systemd[1]: Reached target Local File Systems.
Oct 5 02:42:06 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 5 02:42:06 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 5 02:42:06 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 5 02:42:06 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 5 02:42:06 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 5 02:42:06 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 5 02:42:06 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 5 02:42:06 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 717 (bootctl)
Oct 5 02:42:06 localhost systemd[1]: Starting File System Check on /dev/vda2...
Oct 5 02:42:06 localhost systemd[1]: Finished File System Check on /dev/vda2.
Oct 5 02:42:06 localhost systemd[1]: Mounting EFI System Partition Automount...
Oct 5 02:42:06 localhost systemd[1]: Mounted EFI System Partition Automount.
Oct 5 02:42:06 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 5 02:42:06 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 5 02:42:06 localhost systemd[1]: Starting Security Auditing Service...
Oct 5 02:42:06 localhost systemd[1]: Starting RPC Bind...
Oct 5 02:42:06 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 5 02:42:06 localhost auditd[726]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Oct 5 02:42:06 localhost auditd[726]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Oct 5 02:42:06 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 5 02:42:06 localhost systemd[1]: Started RPC Bind.
Oct 5 02:42:06 localhost augenrules[731]: /sbin/augenrules: No change
Oct 5 02:42:06 localhost augenrules[741]: No rules
Oct 5 02:42:06 localhost augenrules[741]: enabled 1
Oct 5 02:42:06 localhost augenrules[741]: failure 1
Oct 5 02:42:06 localhost augenrules[741]: pid 726
Oct 5 02:42:06 localhost augenrules[741]: rate_limit 0
Oct 5 02:42:06 localhost augenrules[741]: backlog_limit 8192
Oct 5 02:42:06 localhost augenrules[741]: lost 0
Oct 5 02:42:06 localhost augenrules[741]: backlog 0
Oct 5 02:42:06 localhost augenrules[741]: backlog_wait_time 60000
Oct 5 02:42:06 localhost augenrules[741]: backlog_wait_time_actual 0
Oct 5 02:42:06 localhost augenrules[741]: enabled 1
Oct 5 02:42:06 localhost augenrules[741]: failure 1
Oct 5 02:42:06 localhost augenrules[741]: pid 726
Oct 5 02:42:06 localhost augenrules[741]: rate_limit 0
Oct 5 02:42:06 localhost augenrules[741]: backlog_limit 8192
Oct 5 02:42:06 localhost augenrules[741]: lost 0
Oct 5 02:42:06 localhost augenrules[741]: backlog 0
Oct 5 02:42:06 localhost augenrules[741]: backlog_wait_time 60000
Oct 5 02:42:06 localhost augenrules[741]: backlog_wait_time_actual 0
Oct 5 02:42:06 localhost augenrules[741]: enabled 1
Oct 5 02:42:06 localhost augenrules[741]: failure 1
Oct 5 02:42:06 localhost augenrules[741]: pid 726
Oct 5 02:42:06 localhost augenrules[741]: rate_limit 0
Oct 5 02:42:06 localhost augenrules[741]: backlog_limit 8192
Oct 5 02:42:06 localhost augenrules[741]: lost 0
Oct 5 02:42:06 localhost augenrules[741]: backlog 0
Oct 5 02:42:06 localhost augenrules[741]: backlog_wait_time 60000
Oct 5 02:42:06 localhost augenrules[741]: backlog_wait_time_actual 0
Oct 5 02:42:06 localhost systemd[1]: Started Security Auditing Service.
Oct 5 02:42:06 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 5 02:42:06 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 5 02:42:06 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 5 02:42:06 localhost systemd[1]: Starting Update is Completed...
Oct 5 02:42:06 localhost systemd[1]: Finished Update is Completed.
Oct 5 02:42:06 localhost systemd[1]: Reached target System Initialization.
Oct 5 02:42:06 localhost systemd[1]: Started dnf makecache --timer.
Oct 5 02:42:06 localhost systemd[1]: Started Daily rotation of log files.
Oct 5 02:42:06 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 5 02:42:06 localhost systemd[1]: Reached target Timer Units.
Oct 5 02:42:06 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 5 02:42:06 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 5 02:42:06 localhost systemd[1]: Reached target Socket Units.
Oct 5 02:42:06 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Oct 5 02:42:06 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 5 02:42:06 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 5 02:42:06 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 5 02:42:06 localhost systemd[1]: Reached target Basic System.
Oct 5 02:42:06 localhost systemd[1]: Starting NTP client/server...
Oct 5 02:42:06 localhost journal[751]: Ready
Oct 5 02:42:06 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 5 02:42:06 localhost systemd[1]: Started irqbalance daemon.
Oct 5 02:42:06 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 5 02:42:06 localhost systemd[1]: Starting System Logging Service...
Oct 5 02:42:06 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 02:42:06 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 02:42:06 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 02:42:06 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 5 02:42:06 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 5 02:42:06 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 5 02:42:06 localhost systemd[1]: Starting User Login Management...
Oct 5 02:42:06 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 5 02:42:06 localhost rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] start
Oct 5 02:42:06 localhost rsyslogd[759]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Oct 5 02:42:06 localhost systemd[1]: Started System Logging Service.
Oct 5 02:42:06 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 5 02:42:06 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data
Oct 5 02:42:06 localhost chronyd[766]: Loaded seccomp filter (level 2)
Oct 5 02:42:06 localhost systemd[1]: Started NTP client/server.
Oct 5 02:42:06 localhost systemd-logind[760]: New seat seat0.
Oct 5 02:42:06 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 5 02:42:06 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 5 02:42:06 localhost systemd[1]: Started User Login Management.
Oct 5 02:42:06 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 02:42:07 localhost cloud-init[770]: Cloud-init v. 22.1-9.el9 running 'init-local' at Sun, 05 Oct 2025 06:42:07 +0000. Up 6.33 seconds.
Oct 5 02:42:07 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp5x46ye_8.mount: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Starting Hostname Service...
Oct 5 02:42:07 localhost systemd[1]: Started Hostname Service.
Oct 5 02:42:07 localhost systemd-hostnamed[784]: Hostname set to (static)
Oct 5 02:42:07 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Oct 5 02:42:07 localhost systemd[1]: Reached target Preparation for Network.
Oct 5 02:42:07 localhost systemd[1]: Starting Network Manager...
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7123] NetworkManager (version 1.42.2-1.el9) is starting... (boot:9b88e86c-1998-48d6-be72-b09955076fa0)
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7128] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Oct 5 02:42:07 localhost systemd[1]: Started Network Manager.
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7170] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 5 02:42:07 localhost systemd[1]: Reached target Network.
Oct 5 02:42:07 localhost systemd[1]: Starting Network Manager Wait Online...
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7259] manager[0x55c11517e020]: monitoring kernel firmware directory '/lib/firmware'.
Oct 5 02:42:07 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7300] hostname: hostname: using hostnamed
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7301] hostname: static hostname changed from (none) to "np0005471150.novalocal"
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7315] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 5 02:42:07 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Oct 5 02:42:07 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 5 02:42:07 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7516] manager[0x55c11517e020]: rfkill: Wi-Fi hardware radio set enabled
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7517] manager[0x55c11517e020]: rfkill: WWAN hardware radio set enabled
Oct 5 02:42:07 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Oct 5 02:42:07 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7632] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7633] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 5 02:42:07 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 5 02:42:07 localhost systemd[1]: Reached target NFS client services.
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7656] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7657] manager: Networking is enabled by state file
Oct 5 02:42:07 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 5 02:42:07 localhost systemd[1]: Reached target Remote File Systems.
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7716] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7717] settings: Loaded settings plugin: keyfile (internal)
Oct 5 02:42:07 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7755] dhcp: init: Using DHCP client 'internal'
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7761] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7787] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7795] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Oct 5 02:42:07 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7806] device (lo): Activation: starting connection 'lo' (36a94491-7b9f-4c98-bdc7-19c743303238)
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7820] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7825] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7870] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7873] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7876] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7886] device (eth0): carrier: link connected
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7890] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7897] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7923] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7930] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7932] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7937] manager: NetworkManager state is now CONNECTING
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7941] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7952] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.7957] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 5 02:42:07 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8010] dhcp4 (eth0): state changed new lease, address=38.102.83.156
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8017] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8044] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8053] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8057] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8065] device (lo): Activation: successful, device activated.
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8074] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8080] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8087] manager: NetworkManager state is now CONNECTED_SITE
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8092] device (eth0): Activation: successful, device activated.
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8101] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 5 02:42:07 localhost NetworkManager[789]: [1759646527.8107] manager: startup complete
Oct 5 02:42:07 localhost systemd[1]: Finished Network Manager Wait Online.
Oct 5 02:42:07 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Oct 5 02:42:08 localhost cloud-init[900]: Cloud-init v. 22.1-9.el9 running 'init' at Sun, 05 Oct 2025 06:42:08 +0000. Up 7.25 seconds.
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: |  eth0  | True |        38.102.83.156         | 255.255.255.0 | global | fa:16:3e:c6:f3:3d |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: |  eth0  | True | fe80::f816:3eff:fec6:f33d/64 |       .       |  link  | fa:16:3e:c6:f3:3d |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 5 02:42:08 localhost cloud-init[900]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 5 02:42:08 localhost systemd[1]: Starting Authorization Manager...
Oct 5 02:42:08 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Oct 5 02:42:08 localhost polkitd[1036]: Started polkitd version 0.117
Oct 5 02:42:08 localhost systemd[1]: Started Authorization Manager.
Oct 5 02:42:12 localhost cloud-init[900]: Generating public/private rsa key pair.
Oct 5 02:42:12 localhost cloud-init[900]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 5 02:42:12 localhost cloud-init[900]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 5 02:42:12 localhost cloud-init[900]: The key fingerprint is:
Oct 5 02:42:12 localhost cloud-init[900]: SHA256:bpVaAtJQfqdhZHYTbRRP17BMI3DHYe/cZWWsgPCFu7E root@np0005471150.novalocal
Oct 5 02:42:12 localhost cloud-init[900]: The key's randomart image is:
Oct 5 02:42:12 localhost cloud-init[900]: +---[RSA 3072]----+
Oct 5 02:42:12 localhost cloud-init[900]: | ... =.=B*oO+=|
Oct 5 02:42:12 localhost cloud-init[900]: | + + oo++O.*+|
Oct 5 02:42:12 localhost cloud-init[900]: | . + + oo .+.+|
Oct 5 02:42:12 localhost cloud-init[900]: | . + +o. .+o|
Oct 5 02:42:12 localhost cloud-init[900]: | S ++ +|
Oct 5 02:42:12 localhost cloud-init[900]: | . =E |
Oct 5 02:42:12 localhost cloud-init[900]: | + |
Oct 5 02:42:12 localhost cloud-init[900]: | . |
Oct 5 02:42:12 localhost cloud-init[900]: | |
Oct 5 02:42:12 localhost cloud-init[900]: +----[SHA256]-----+
Oct 5 02:42:12 localhost cloud-init[900]: Generating public/private ecdsa key pair.
Oct 5 02:42:12 localhost cloud-init[900]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 5 02:42:12 localhost cloud-init[900]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 5 02:42:12 localhost cloud-init[900]: The key fingerprint is:
Oct 5 02:42:12 localhost cloud-init[900]: SHA256:Cj4J7VGp0ryHuaVo0n7w2gGbe+eff2w0V+hEaJGFFXI root@np0005471150.novalocal
Oct 5 02:42:12 localhost cloud-init[900]: The key's randomart image is:
Oct 5 02:42:12 localhost cloud-init[900]: +---[ECDSA 256]---+
Oct 5 02:42:12 localhost cloud-init[900]: | oBE. |
Oct 5 02:42:12 localhost cloud-init[900]: | . =o. |
Oct 5 02:42:12 localhost cloud-init[900]: | o . . . |
Oct 5 02:42:12 localhost cloud-init[900]: | + o o .|
Oct 5 02:42:12 localhost cloud-init[900]: | + B S o . |
Oct 5 02:42:12 localhost cloud-init[900]: | .O B . o o |
Oct 5 02:42:12 localhost cloud-init[900]: | .ooX + o o |
Oct 5 02:42:12 localhost cloud-init[900]: |. o++B. . + |
Oct 5 02:42:12 localhost cloud-init[900]: | +=++o..o..o |
Oct 5 02:42:12 localhost cloud-init[900]: +----[SHA256]-----+
Oct 5 02:42:12 localhost cloud-init[900]: Generating public/private ed25519 key pair.
Oct 5 02:42:12 localhost cloud-init[900]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 5 02:42:12 localhost cloud-init[900]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 5 02:42:12 localhost cloud-init[900]: The key fingerprint is:
Oct 5 02:42:12 localhost cloud-init[900]: SHA256:BiAer7vxWqMNPfZASigcJYtRWkdR1EFy2f0AGeNNtyE root@np0005471150.novalocal
Oct 5 02:42:12 localhost cloud-init[900]: The key's randomart image is:
Oct 5 02:42:12 localhost cloud-init[900]: +--[ED25519 256]--+
Oct 5 02:42:12 localhost cloud-init[900]: |.o=o=++o+++=E o |
Oct 5 02:42:12 localhost cloud-init[900]: |.=+= . oo.o+oo o |
Oct 5 02:42:12 localhost cloud-init[900]: |oo. . . . .o. |
Oct 5 02:42:12 localhost cloud-init[900]: |.... . . |
Oct 5 02:42:12 localhost cloud-init[900]: |o.o . S |
Oct 5 02:42:12 localhost cloud-init[900]: |.. = . |
Oct 5 02:42:12 localhost cloud-init[900]: | = B |
Oct 5 02:42:12 localhost cloud-init[900]: | X = |
Oct 5 02:42:12 localhost cloud-init[900]: | +.o . |
Oct 5 02:42:12 localhost cloud-init[900]: +----[SHA256]-----+
Oct 5 02:42:12 localhost sm-notify[1132]: Version 2.5.4 starting
Oct 5 02:42:12 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Oct 5 02:42:12 localhost systemd[1]: Reached target Cloud-config availability.
Oct 5 02:42:12 localhost systemd[1]: Reached target Network is Online.
Oct 5 02:42:12 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Oct 5 02:42:12 localhost sshd[1133]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Oct 5 02:42:12 localhost systemd[1]: Starting Crash recovery kernel arming...
Oct 5 02:42:12 localhost systemd[1]: Starting Notify NFS peers of a restart...
Oct 5 02:42:12 localhost sshd[1136]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost systemd[1]: Starting OpenSSH server daemon...
Oct 5 02:42:12 localhost systemd[1]: Starting Permit User Sessions...
Oct 5 02:42:12 localhost systemd[1]: Started Notify NFS peers of a restart.
Oct 5 02:42:12 localhost systemd[1]: Finished Permit User Sessions.
Oct 5 02:42:12 localhost systemd[1]: Started OpenSSH server daemon.
Oct 5 02:42:12 localhost systemd[1]: Started Command Scheduler.
Oct 5 02:42:12 localhost systemd[1]: Started Getty on tty1.
Oct 5 02:42:12 localhost systemd[1]: Started Serial Getty on ttyS0.
Oct 5 02:42:12 localhost systemd[1]: Reached target Login Prompts.
Oct 5 02:42:12 localhost systemd[1]: Reached target Multi-User System.
Oct 5 02:42:12 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 5 02:42:12 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 5 02:42:12 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 5 02:42:12 localhost sshd[1148]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost sshd[1158]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost sshd[1170]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost sshd[1178]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost sshd[1193]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost kdumpctl[1137]: kdump: No kdump initial ramdisk found.
Oct 5 02:42:12 localhost kdumpctl[1137]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Oct 5 02:42:12 localhost sshd[1201]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost sshd[1207]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost sshd[1216]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:12 localhost cloud-init[1251]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Sun, 05 Oct 2025 06:42:12 +0000. Up 11.56 seconds.
Oct 5 02:42:12 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Oct 5 02:42:12 localhost systemd[1]: Starting Execute cloud user/final scripts...
Oct 5 02:42:12 localhost cloud-init[1434]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Sun, 05 Oct 2025 06:42:12 +0000. Up 11.92 seconds.
Oct 5 02:42:12 localhost dracut[1438]: dracut-057-21.git20230214.el9
Oct 5 02:42:12 localhost cloud-init[1455]: #############################################################
Oct 5 02:42:12 localhost cloud-init[1456]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 5 02:42:12 localhost cloud-init[1458]: 256 SHA256:Cj4J7VGp0ryHuaVo0n7w2gGbe+eff2w0V+hEaJGFFXI root@np0005471150.novalocal (ECDSA)
Oct 5 02:42:12 localhost cloud-init[1460]: 256 SHA256:BiAer7vxWqMNPfZASigcJYtRWkdR1EFy2f0AGeNNtyE root@np0005471150.novalocal (ED25519)
Oct 5 02:42:12 localhost cloud-init[1462]: 3072 SHA256:bpVaAtJQfqdhZHYTbRRP17BMI3DHYe/cZWWsgPCFu7E root@np0005471150.novalocal (RSA)
Oct 5 02:42:12 localhost cloud-init[1463]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 5 02:42:12 localhost cloud-init[1464]: #############################################################
Oct 5 02:42:12 localhost cloud-init[1434]: Cloud-init v. 22.1-9.el9 finished at Sun, 05 Oct 2025 06:42:12 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 12.14 seconds
Oct 5 02:42:12 localhost dracut[1440]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Oct 5 02:42:13 localhost systemd[1]: Reloading Network Manager...
Oct 5 02:42:13 localhost NetworkManager[789]: [1759646533.0268] audit: op="reload" arg="0" pid=1525 uid=0 result="success"
Oct 5 02:42:13 localhost NetworkManager[789]: [1759646533.0276] config: signal: SIGHUP (no changes from disk)
Oct 5 02:42:13 localhost systemd[1]: Reloaded Network Manager.
Oct 5 02:42:13 localhost systemd[1]: Finished Execute cloud user/final scripts.
Oct 5 02:42:13 localhost systemd[1]: Reached target Cloud-init target.
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: memstrack is not available
Oct 5 02:42:13 localhost dracut[1440]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Oct 5 02:42:13 localhost dracut[1440]: memstrack is not available
Oct 5 02:42:13 localhost dracut[1440]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Oct 5 02:42:14 localhost chronyd[766]: Selected source 149.56.19.163 (2.rhel.pool.ntp.org)
Oct 5 02:42:14 localhost chronyd[766]: System clock TAI offset set to 37 seconds
Oct 5 02:42:14 localhost dracut[1440]: *** Including module: systemd ***
Oct 5 02:42:14 localhost dracut[1440]: *** Including module: systemd-initrd ***
Oct 5 02:42:14 localhost dracut[1440]: *** Including module: i18n ***
Oct 5 02:42:14 localhost dracut[1440]: No KEYMAP configured.
Oct 5 02:42:14 localhost dracut[1440]: *** Including module: drm ***
Oct 5 02:42:14 localhost dracut[1440]: *** Including module: prefixdevname ***
Oct 5 02:42:15 localhost dracut[1440]: *** Including module: kernel-modules ***
Oct 5 02:42:15 localhost dracut[1440]: *** Including module: kernel-modules-extra ***
Oct 5 02:42:15 localhost dracut[1440]: *** Including module: qemu ***
Oct 5 02:42:15 localhost dracut[1440]: *** Including module: fstab-sys ***
Oct 5 02:42:15 localhost dracut[1440]: *** Including module: rootfs-block ***
Oct 5 02:42:15 localhost dracut[1440]: *** Including module: terminfo ***
Oct 5 02:42:15 localhost dracut[1440]: *** Including module: udev-rules ***
Oct 5 02:42:16 localhost dracut[1440]: Skipping udev rule: 91-permissions.rules
Oct 5 02:42:16 localhost dracut[1440]: Skipping udev rule: 80-drivers-modprobe.rules
Oct 5 02:42:16 localhost dracut[1440]: *** Including module: virtiofs ***
Oct 5 02:42:16 localhost dracut[1440]: *** Including module: dracut-systemd ***
Oct 5 02:42:16 localhost dracut[1440]: *** Including module: usrmount ***
Oct 5 02:42:16 localhost dracut[1440]: *** Including module: base ***
Oct 5 02:42:16 localhost dracut[1440]: *** Including module: fs-lib ***
Oct 5 02:42:16 localhost dracut[1440]: *** Including module: kdumpbase ***
Oct 5 02:42:16 localhost dracut[1440]: *** Including module: microcode_ctl-fw_dir_override ***
Oct 5 02:42:16 localhost dracut[1440]: microcode_ctl module: mangling fw_dir
Oct 5 02:42:16 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: configuration "intel" is ignored
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: configuration "intel-06-2d-07" is ignored
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: configuration "intel-06-4e-03" is ignored
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: configuration "intel-06-55-04" is ignored
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: configuration "intel-06-5e-03" is ignored
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: configuration "intel-06-8c-01" is ignored
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Oct 5 02:42:17 localhost dracut[1440]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Oct 5 02:42:17 localhost dracut[1440]: *** Including module: shutdown ***
Oct 5 02:42:17 localhost dracut[1440]: *** Including module: squash ***
Oct 5 02:42:17 localhost dracut[1440]: *** Including modules done ***
Oct 5 02:42:17 localhost dracut[1440]: *** Installing kernel module dependencies ***
Oct 5 02:42:17 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 5 02:42:18 localhost dracut[1440]: *** Installing kernel module dependencies done ***
Oct 5 02:42:18 localhost dracut[1440]: *** Resolving executable dependencies ***
Oct 5 02:42:19 localhost dracut[1440]: *** Resolving executable dependencies done ***
Oct 5 02:42:19 localhost dracut[1440]: *** Hardlinking files ***
Oct 5 02:42:19 localhost dracut[1440]: Mode: real
Oct 5 02:42:19 localhost dracut[1440]: Files: 1099
Oct 5 02:42:19 localhost dracut[1440]: Linked: 3 files
Oct 5 02:42:19 localhost dracut[1440]: Compared: 0 xattrs
Oct 5 02:42:19 localhost dracut[1440]: Compared: 373 files
Oct 5 02:42:19 localhost dracut[1440]: Saved: 61.04 KiB
Oct 5 02:42:19 localhost dracut[1440]: Duration: 0.052352 seconds
Oct 5 02:42:19 localhost dracut[1440]: *** Hardlinking files done ***
Oct 5 02:42:19 localhost dracut[1440]: Could not find 'strip'. Not stripping the initramfs.
Oct 5 02:42:19 localhost dracut[1440]: *** Generating early-microcode cpio image ***
Oct 5 02:42:19 localhost dracut[1440]: *** Constructing AuthenticAMD.bin ***
Oct 5 02:42:19 localhost dracut[1440]: *** Store current command line parameters ***
Oct 5 02:42:19 localhost dracut[1440]: Stored kernel commandline:
Oct 5 02:42:19 localhost dracut[1440]: No dracut internal kernel commandline stored in the initramfs
Oct 5 02:42:19 localhost dracut[1440]: *** Install squash loader ***
Oct 5 02:42:20 localhost dracut[1440]: *** Squashing the files inside the initramfs ***
Oct 5 02:42:21 localhost dracut[1440]: *** Squashing the files inside the initramfs done ***
Oct 5 02:42:21 localhost dracut[1440]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Oct 5 02:42:21 localhost dracut[1440]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Oct 5 02:42:22 localhost kdumpctl[1137]: kdump: kexec: loaded kdump kernel
Oct 5 02:42:22 localhost kdumpctl[1137]: kdump: Starting kdump: [OK]
Oct 5 02:42:22 localhost systemd[1]: Finished Crash recovery kernel arming.
Oct 5 02:42:22 localhost systemd[1]: Startup finished in 1.255s (kernel) + 2.026s (initrd) + 18.388s (userspace) = 21.669s.
Oct 5 02:42:37 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 5 02:42:55 localhost sshd[4176]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:44:02 localhost sshd[4179]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:44:02 localhost systemd-logind[760]: New session 1 of user zuul.
Oct 5 02:44:02 localhost systemd[1]: Created slice User Slice of UID 1000.
Oct 5 02:44:03 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 5 02:44:03 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 5 02:44:03 localhost systemd[1]: Starting User Manager for UID 1000...
Oct 5 02:44:03 localhost systemd[4183]: Queued start job for default target Main User Target.
Oct 5 02:44:03 localhost systemd[4183]: Created slice User Application Slice.
Oct 5 02:44:03 localhost systemd[4183]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 5 02:44:03 localhost systemd[4183]: Started Daily Cleanup of User's Temporary Directories.
Oct 5 02:44:03 localhost systemd[4183]: Reached target Paths.
Oct 5 02:44:03 localhost systemd[4183]: Reached target Timers.
Oct 5 02:44:03 localhost systemd[4183]: Starting D-Bus User Message Bus Socket...
Oct 5 02:44:03 localhost systemd[4183]: Starting Create User's Volatile Files and Directories...
Oct 5 02:44:03 localhost systemd[4183]: Listening on D-Bus User Message Bus Socket.
Oct 5 02:44:03 localhost systemd[4183]: Reached target Sockets.
Oct 5 02:44:03 localhost systemd[4183]: Finished Create User's Volatile Files and Directories.
Oct 5 02:44:03 localhost systemd[4183]: Reached target Basic System.
Oct 5 02:44:03 localhost systemd[4183]: Reached target Main User Target.
Oct 5 02:44:03 localhost systemd[4183]: Startup finished in 114ms.
Oct 5 02:44:03 localhost systemd[1]: Started User Manager for UID 1000.
Oct 5 02:44:03 localhost systemd[1]: Started Session 1 of User zuul.
Oct 5 02:44:03 localhost python3[4235]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 02:44:12 localhost python3[4254]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 02:44:21 localhost python3[4307]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 02:44:22 localhost python3[4337]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 5 02:44:25 localhost python3[4353]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:26 localhost python3[4367]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:27 localhost python3[4426]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 02:44:28 localhost python3[4467]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759646667.6378045-388-165761550075291/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=7f1aa78692d846b294ef5fe66a5a98ad_id_rsa follow=False checksum=cf09eb456a314382f639138519dc421f9df58c1f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:29 localhost python3[4540]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 02:44:30 localhost python3[4581]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759646669.397324-487-147801797351407/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=7f1aa78692d846b294ef5fe66a5a98ad_id_rsa.pub follow=False checksum=eb73baa214aed5877413178ed76ec0f476520beb backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:31 localhost python3[4609]: ansible-ping Invoked with data=pong
Oct 5 02:44:34 localhost python3[4623]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 02:44:38 localhost python3[4676]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 5 02:44:41 localhost python3[4698]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:41 localhost python3[4712]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:41 localhost python3[4726]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:42 localhost python3[4740]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:43 localhost python3[4754]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:43 localhost python3[4768]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:46 localhost python3[4784]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:47 localhost python3[4833]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 02:44:48 localhost python3[4876]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759646687.6401157-98-147464027837549/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:55 localhost python3[4904]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:56 localhost python3[4918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:56 localhost python3[4932]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:56 localhost python3[4946]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:56 localhost python3[4960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:57 localhost python3[4974]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:57 localhost python3[4988]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:57 localhost python3[5002]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:57 localhost python3[5016]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:58 localhost python3[5030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:58 localhost python3[5044]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:58 localhost python3[5058]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:58 localhost python3[5072]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:59 localhost python3[5086]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:59 localhost python3[5100]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:59 localhost python3[5114]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:59 localhost python3[5128]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:00 localhost python3[5142]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:00 localhost python3[5156]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:00 localhost python3[5170]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:00 localhost python3[5184]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:01 localhost python3[5198]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:01 localhost python3[5212]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:01 localhost python3[5226]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:01 localhost python3[5240]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:02 localhost python3[5254]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:03 localhost python3[5270]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 5 02:45:03 localhost systemd[1]: Starting Time & Date Service...
Oct 5 02:45:03 localhost systemd[1]: Started Time & Date Service.
Oct 5 02:45:03 localhost systemd-timedated[5272]: Changed time zone to 'UTC' (UTC).
Oct 5 02:45:04 localhost python3[5291]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:05 localhost python3[5337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:45:06 localhost python3[5378]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759646705.6142244-492-257475766286986/source _original_basename=tmp__mnje78 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:07 localhost python3[5438]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:45:07 localhost python3[5479]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759646707.1581106-581-57120676963729/source _original_basename=tmpg131n6h6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:09 localhost python3[5541]: ansible-ansible.legacy.stat Invoked with 
path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:45:09 localhost python3[5584]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759646709.214921-725-5229132197989/source _original_basename=tmpz31u16el follow=False checksum=bd04c4e2bbffb15439bf671f57e577cfe66c7fe6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:11 localhost python3[5612]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:45:11 localhost python3[5628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:45:12 localhost python3[5678]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:45:12 localhost python3[5721]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759646712.309745-851-218314406381948/source _original_basename=tmpo0g22oaw follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None 
local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:14 localhost python3[5752]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-4bc2-3529-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:45:25 localhost python3[5770]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4bc2-3529-000000000024-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Oct 5 02:45:33 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Oct 5 02:45:37 localhost python3[5791]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:55 localhost python3[5807]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:46:10 localhost systemd[4183]: Starting Mark boot as successful... Oct 5 02:46:10 localhost systemd[4183]: Finished Mark boot as successful. Oct 5 02:46:55 localhost systemd-logind[760]: Session 1 logged out. 
Waiting for processes to exit. Oct 5 02:47:28 localhost systemd[1]: Unmounting EFI System Partition Automount... Oct 5 02:47:28 localhost systemd[1]: efi.mount: Deactivated successfully. Oct 5 02:47:28 localhost systemd[1]: Unmounted EFI System Partition Automount. Oct 5 02:49:03 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Oct 5 02:49:03 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Oct 5 02:49:03 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Oct 5 02:49:03 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Oct 5 02:49:03 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Oct 5 02:49:03 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Oct 5 02:49:03 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Oct 5 02:49:03 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Oct 5 02:49:03 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Oct 5 02:49:03 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3584] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Oct 5 02:49:03 localhost systemd-udevd[5816]: Network interface NamePolicy= disabled on kernel command line. Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3722] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Oct 5 02:49:03 localhost systemd[4183]: Created slice User Background Tasks Slice. 
Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3755] settings: (eth1): created default wired connection 'Wired connection 1' Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3763] device (eth1): carrier: link connected Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3768] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Oct 5 02:49:03 localhost systemd[4183]: Starting Cleanup of User's Temporary Files and Directories... Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3777] policy: auto-activating connection 'Wired connection 1' (a18906bd-276c-3e4c-96c4-a02aced66c46) Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3785] device (eth1): Activation: starting connection 'Wired connection 1' (a18906bd-276c-3e4c-96c4-a02aced66c46) Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3788] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3796] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3804] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Oct 5 02:49:03 localhost NetworkManager[789]: [1759646943.3810] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Oct 5 02:49:03 localhost systemd[4183]: Finished Cleanup of User's Temporary Files and Directories. Oct 5 02:49:04 localhost sshd[5820]: main: sshd: ssh-rsa algorithm is disabled Oct 5 02:49:04 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Oct 5 02:49:04 localhost systemd-logind[760]: New session 3 of user zuul. Oct 5 02:49:04 localhost systemd[1]: Started Session 3 of User zuul. 
Oct 5 02:49:04 localhost python3[5837]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-18e8-600c-00000000039b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:49:15 localhost sshd[5840]: main: sshd: ssh-rsa algorithm is disabled Oct 5 02:49:17 localhost python3[5889]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:49:18 localhost python3[5932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759646957.5178418-435-158924140294787/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=e76afabdf2e10d2c3426844e6371068cf537a275 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:49:18 localhost python3[5962]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 02:49:18 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Oct 5 02:49:18 localhost systemd[1]: Stopped Network Manager Wait Online. Oct 5 02:49:18 localhost systemd[1]: Stopping Network Manager Wait Online... Oct 5 02:49:18 localhost systemd[1]: Stopping Network Manager... Oct 5 02:49:18 localhost NetworkManager[789]: [1759646958.7675] caught SIGTERM, shutting down normally. 
Oct 5 02:49:18 localhost NetworkManager[789]: [1759646958.7775] dhcp4 (eth0): canceled DHCP transaction Oct 5 02:49:18 localhost NetworkManager[789]: [1759646958.7775] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Oct 5 02:49:18 localhost NetworkManager[789]: [1759646958.7775] dhcp4 (eth0): state changed no lease Oct 5 02:49:18 localhost NetworkManager[789]: [1759646958.7783] manager: NetworkManager state is now CONNECTING Oct 5 02:49:18 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Oct 5 02:49:18 localhost NetworkManager[789]: [1759646958.7895] dhcp4 (eth1): canceled DHCP transaction Oct 5 02:49:18 localhost NetworkManager[789]: [1759646958.7896] dhcp4 (eth1): state changed no lease Oct 5 02:49:18 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Oct 5 02:49:18 localhost NetworkManager[789]: [1759646958.8262] exiting (success) Oct 5 02:49:18 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Oct 5 02:49:18 localhost systemd[1]: Stopped Network Manager. Oct 5 02:49:18 localhost systemd[1]: NetworkManager.service: Consumed 1.883s CPU time. Oct 5 02:49:18 localhost systemd[1]: Starting Network Manager... Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.8855] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:9b88e86c-1998-48d6-be72-b09955076fa0) Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.8856] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.8877] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Oct 5 02:49:18 localhost systemd[1]: Started Network Manager. Oct 5 02:49:18 localhost systemd[1]: Starting Network Manager Wait Online... Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.8932] manager[0x564a96dbb090]: monitoring kernel firmware directory '/lib/firmware'. 
Oct 5 02:49:18 localhost systemd[1]: Starting Hostname Service... Oct 5 02:49:18 localhost systemd[1]: Started Hostname Service. Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9794] hostname: hostname: using hostnamed Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9795] hostname: static hostname changed from (none) to "np0005471150.novalocal" Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9801] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9808] manager[0x564a96dbb090]: rfkill: Wi-Fi hardware radio set enabled Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9808] manager[0x564a96dbb090]: rfkill: WWAN hardware radio set enabled Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9848] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9849] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9850] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9851] manager: Networking is enabled by state file Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9860] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9860] settings: Loaded settings plugin: keyfile (internal) Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9913] dhcp: init: Using DHCP client 'internal' Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9919] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9930] device (lo): state change: unmanaged -> unavailable (reason 
'connection-assumed', sys-iface-state: 'external') Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9941] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9958] device (lo): Activation: starting connection 'lo' (36a94491-7b9f-4c98-bdc7-19c743303238) Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9970] device (eth0): carrier: link connected Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9976] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9987] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Oct 5 02:49:18 localhost NetworkManager[5981]: [1759646958.9988] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0001] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0013] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0023] device (eth1): carrier: link connected Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0029] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0038] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (a18906bd-276c-3e4c-96c4-a02aced66c46) (indicated) Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0039] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Oct 5 02:49:19 
localhost NetworkManager[5981]: [1759646959.0047] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0060] device (eth1): Activation: starting connection 'Wired connection 1' (a18906bd-276c-3e4c-96c4-a02aced66c46) Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0090] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0095] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0098] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0102] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0108] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0113] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0117] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0120] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0133] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0139] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0151] device (eth1): state change: config -> 
ip-config (reason 'none', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0156] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0205] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0213] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0222] device (lo): Activation: successful, device activated. Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0232] dhcp4 (eth0): state changed new lease, address=38.102.83.156 Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0239] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0372] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0409] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0411] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0416] manager: NetworkManager state is now CONNECTED_SITE Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0421] device (eth0): Activation: successful, device activated. 
Oct 5 02:49:19 localhost NetworkManager[5981]: [1759646959.0430] manager: NetworkManager state is now CONNECTED_GLOBAL Oct 5 02:49:19 localhost python3[6036]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-18e8-600c-000000000120-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:49:29 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Oct 5 02:49:48 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 5 02:50:03 localhost NetworkManager[5981]: [1759647003.7766] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:03 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Oct 5 02:50:03 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Oct 5 02:50:03 localhost NetworkManager[5981]: [1759647003.8015] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:03 localhost NetworkManager[5981]: [1759647003.8020] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:03 localhost NetworkManager[5981]: [1759647003.8034] device (eth1): Activation: successful, device activated. Oct 5 02:50:03 localhost NetworkManager[5981]: [1759647003.8047] manager: startup complete Oct 5 02:50:03 localhost systemd[1]: Finished Network Manager Wait Online. Oct 5 02:50:13 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Oct 5 02:50:19 localhost systemd-logind[760]: Session 3 logged out. Waiting for processes to exit. Oct 5 02:50:19 localhost systemd[1]: session-3.scope: Deactivated successfully. Oct 5 02:50:19 localhost systemd[1]: session-3.scope: Consumed 1.434s CPU time. 
Oct 5 02:50:19 localhost systemd-logind[760]: Removed session 3. Oct 5 02:51:26 localhost sshd[6063]: main: sshd: ssh-rsa algorithm is disabled Oct 5 02:51:26 localhost systemd-logind[760]: New session 4 of user zuul. Oct 5 02:51:26 localhost systemd[1]: Started Session 4 of User zuul. Oct 5 02:51:26 localhost python3[6114]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:51:26 localhost python3[6157]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759647086.2509193-628-42040348651810/source _original_basename=tmpdgsc03gg follow=False checksum=0e431f410d06a0c53c5dd3a865f2812ab817026e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:51:31 localhost systemd[1]: session-4.scope: Deactivated successfully. Oct 5 02:51:31 localhost systemd-logind[760]: Session 4 logged out. Waiting for processes to exit. Oct 5 02:51:31 localhost systemd-logind[760]: Removed session 4. Oct 5 02:55:28 localhost sshd[6174]: main: sshd: ssh-rsa algorithm is disabled Oct 5 02:57:10 localhost systemd[1]: Starting Cleanup of Temporary Directories... Oct 5 02:57:10 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Oct 5 02:57:10 localhost systemd[1]: Finished Cleanup of Temporary Directories. Oct 5 02:57:10 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Oct 5 02:57:39 localhost sshd[6180]: main: sshd: ssh-rsa algorithm is disabled Oct 5 02:57:39 localhost systemd-logind[760]: New session 5 of user zuul. Oct 5 02:57:39 localhost systemd[1]: Started Session 5 of User zuul. 
Oct 5 02:57:39 localhost python3[6199]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-183a-2b8b-000000001d22-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:57:41 localhost python3[6218]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:42 localhost python3[6234]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:42 localhost python3[6250]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:42 localhost python3[6266]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:43 localhost python3[6282]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:43 localhost python3[6282]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually Oct 5 02:57:45 localhost python3[6298]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 02:57:45 localhost systemd[1]: Reloading. Oct 5 02:57:45 localhost systemd-rc-local-generator[6319]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 02:57:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 02:57:46 localhost python3[6345]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 5 02:57:48 localhost python3[6361]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:48 localhost python3[6379]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:48 localhost python3[6397]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:48 localhost python3[6415]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:50 localhost python3[6432]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-183a-2b8b-000000001d28-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:50 localhost python3[6452]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 02:57:53 localhost systemd[1]: session-5.scope: Deactivated successfully.
Oct 5 02:57:53 localhost systemd[1]: session-5.scope: Consumed 3.173s CPU time.
Oct 5 02:57:53 localhost systemd-logind[760]: Session 5 logged out. Waiting for processes to exit.
Oct 5 02:57:53 localhost systemd-logind[760]: Removed session 5.
Oct 5 02:59:06 localhost sshd[6461]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:59:07 localhost systemd-logind[760]: New session 6 of user zuul.
Oct 5 02:59:07 localhost systemd[1]: Started Session 6 of User zuul.
Oct 5 02:59:07 localhost systemd[1]: Starting RHSM dbus service...
Oct 5 02:59:08 localhost systemd[1]: Started RHSM dbus service.
Oct 5 02:59:08 localhost rhsm-service[6485]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:08 localhost rhsm-service[6485]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:08 localhost rhsm-service[6485]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:08 localhost rhsm-service[6485]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:09 localhost rhsm-service[6485]: INFO [subscription_manager.managerlib:90] Consumer created: np0005471150.novalocal (389ffb6f-80ba-4204-ad63-11cd8b0f11fc)
Oct 5 02:59:09 localhost subscription-manager[6485]: Registered system with identity: 389ffb6f-80ba-4204-ad63-11cd8b0f11fc
Oct 5 02:59:09 localhost rhsm-service[6485]: INFO [subscription_manager.entcertlib:131] certs updated:
Oct 5 02:59:09 localhost rhsm-service[6485]: Total updates: 1
Oct 5 02:59:09 localhost rhsm-service[6485]: Found (local) serial# []
Oct 5 02:59:09 localhost rhsm-service[6485]: Expected (UEP) serial# [1981485633762690576]
Oct 5 02:59:09 localhost rhsm-service[6485]: Added (new)
Oct 5 02:59:09 localhost rhsm-service[6485]: [sn:1981485633762690576 ( Content Access,) @ /etc/pki/entitlement/1981485633762690576.pem]
Oct 5 02:59:09 localhost rhsm-service[6485]: Deleted (rogue):
Oct 5 02:59:09 localhost rhsm-service[6485]:
Oct 5 02:59:09 localhost subscription-manager[6485]: Added subscription for 'Content Access' contract 'None'
Oct 5 02:59:09 localhost subscription-manager[6485]: Added subscription for product ' Content Access'
Oct 5 02:59:10 localhost rhsm-service[6485]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:10 localhost rhsm-service[6485]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:10 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:11 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:11 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:11 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:11 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:21 localhost python3[6576]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163efc-24cc-069e-d673-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:59:22 localhost python3[6595]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 02:59:52 localhost setsebool[6670]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 5 02:59:52 localhost setsebool[6670]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 5 03:00:00 localhost kernel: SELinux: Converting 407 SID table entries...
Oct 5 03:00:00 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 03:00:00 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 03:00:00 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 03:00:00 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 03:00:00 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 03:00:00 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 03:00:00 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 03:00:13 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=3 res=1
Oct 5 03:00:13 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 03:00:13 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 03:00:13 localhost systemd[1]: Reloading.
Oct 5 03:00:13 localhost systemd-rc-local-generator[7525]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:00:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:00:13 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 03:00:19 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:00:22 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 03:00:22 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 03:00:22 localhost systemd[1]: man-db-cache-update.service: Consumed 10.466s CPU time.
Oct 5 03:00:22 localhost systemd[1]: run-rc9c6435ecb724391a6a23324588b0d7e.service: Deactivated successfully.
Oct 5 03:01:14 localhost systemd[1]: session-6.scope: Deactivated successfully.
Oct 5 03:01:14 localhost systemd[1]: session-6.scope: Consumed 48.530s CPU time.
Oct 5 03:01:14 localhost systemd-logind[760]: Session 6 logged out. Waiting for processes to exit.
Oct 5 03:01:14 localhost systemd-logind[760]: Removed session 6.
Oct 5 03:01:28 localhost sshd[18330]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:28 localhost systemd-logind[760]: New session 7 of user zuul.
Oct 5 03:01:28 localhost systemd[1]: Started Session 7 of User zuul.
Oct 5 03:01:29 localhost podman[18350]: 2025-10-05 07:01:29.038639086 +0000 UTC m=+0.104247403 system refresh
Oct 5 03:01:29 localhost systemd[4183]: Starting D-Bus User Message Bus...
Oct 5 03:01:29 localhost dbus-broker-launch[18408]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 5 03:01:29 localhost dbus-broker-launch[18408]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 5 03:01:29 localhost systemd[4183]: Started D-Bus User Message Bus.
Oct 5 03:01:29 localhost journal[18408]: Ready
Oct 5 03:01:29 localhost systemd[4183]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1
Oct 5 03:01:29 localhost systemd[4183]: Created slice Slice /user.
Oct 5 03:01:29 localhost systemd[4183]: podman-18391.scope: unit configures an IP firewall, but not running as root.
Oct 5 03:01:29 localhost systemd[4183]: (This warning is only shown for the first unit using IP firewalling.)
Oct 5 03:01:29 localhost systemd[4183]: Started podman-18391.scope.
Oct 5 03:01:30 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:01:30 localhost systemd[4183]: Started podman-pause-a4029bce.scope.
Oct 5 03:01:32 localhost systemd[1]: session-7.scope: Deactivated successfully.
Oct 5 03:01:32 localhost systemd[1]: session-7.scope: Consumed 1.132s CPU time.
Oct 5 03:01:32 localhost systemd-logind[760]: Session 7 logged out. Waiting for processes to exit.
Oct 5 03:01:32 localhost systemd-logind[760]: Removed session 7.
Oct 5 03:01:44 localhost sshd[18411]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:47 localhost sshd[18413]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:47 localhost sshd[18417]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:47 localhost sshd[18415]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:47 localhost sshd[18414]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:47 localhost sshd[18416]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:52 localhost sshd[18423]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:52 localhost systemd-logind[760]: New session 8 of user zuul.
Oct 5 03:01:52 localhost systemd[1]: Started Session 8 of User zuul.
Oct 5 03:01:52 localhost python3[18440]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP87f0TJ5B+7awiBdUKl49eWXdyOF8cjINJofgf8ukEJzb/lAYySAOznll5JFt00uw/yZng5hSo6312SA6R3VqM= zuul@np0005471143.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:01:53 localhost python3[18456]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP87f0TJ5B+7awiBdUKl49eWXdyOF8cjINJofgf8ukEJzb/lAYySAOznll5JFt00uw/yZng5hSo6312SA6R3VqM= zuul@np0005471143.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:01:54 localhost systemd[1]: session-8.scope: Deactivated successfully.
Oct 5 03:01:54 localhost systemd-logind[760]: Session 8 logged out. Waiting for processes to exit.
Oct 5 03:01:54 localhost systemd-logind[760]: Removed session 8.
Oct 5 03:03:18 localhost sshd[18459]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:03:18 localhost systemd-logind[760]: New session 9 of user zuul.
Oct 5 03:03:18 localhost systemd[1]: Started Session 9 of User zuul.
Oct 5 03:03:18 localhost python3[18478]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:03:19 localhost python3[18494]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471150.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 5 03:03:21 localhost python3[18544]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:21 localhost python3[18587]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759647800.8096373-133-113770141598693/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=7f1aa78692d846b294ef5fe66a5a98ad_id_rsa follow=False checksum=cf09eb456a314382f639138519dc421f9df58c1f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:22 localhost python3[18649]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:23 localhost python3[18692]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759647802.5080514-222-216495058626902/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=7f1aa78692d846b294ef5fe66a5a98ad_id_rsa.pub follow=False checksum=eb73baa214aed5877413178ed76ec0f476520beb backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:25 localhost python3[18722]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:26 localhost python3[18768]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:26 localhost python3[18784]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpwmfu7eiq recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:27 localhost python3[18844]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:28 localhost python3[18860]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmppz0ezegv recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:29 localhost python3[18920]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:29 localhost python3[18936]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpw33z2ioz recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:30 localhost systemd-logind[760]: Session 9 logged out. Waiting for processes to exit.
Oct 5 03:03:30 localhost systemd[1]: session-9.scope: Deactivated successfully.
Oct 5 03:03:30 localhost systemd[1]: session-9.scope: Consumed 3.611s CPU time.
Oct 5 03:03:30 localhost systemd-logind[760]: Removed session 9.
Oct 5 03:05:33 localhost sshd[18953]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:05:33 localhost systemd-logind[760]: New session 10 of user zuul.
Oct 5 03:05:33 localhost systemd[1]: Started Session 10 of User zuul.
Oct 5 03:05:33 localhost python3[18999]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:08:00 localhost sshd[19002]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:10:33 localhost systemd[1]: session-10.scope: Deactivated successfully.
Oct 5 03:10:33 localhost systemd-logind[760]: Session 10 logged out. Waiting for processes to exit.
Oct 5 03:10:33 localhost systemd-logind[760]: Removed session 10.
Oct 5 03:11:36 localhost sshd[19007]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:13:17 localhost sshd[19009]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:13:19 localhost sshd[19010]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:13:58 localhost sshd[19012]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:15:49 localhost sshd[19016]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:15:49 localhost systemd-logind[760]: New session 11 of user zuul.
Oct 5 03:15:49 localhost systemd[1]: Started Session 11 of User zuul.
Oct 5 03:15:50 localhost python3[19033]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163efc-24cc-4d81-b09d-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:15:51 localhost python3[19053]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163efc-24cc-4d81-b09d-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:15:56 localhost python3[19072]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Oct 5 03:15:59 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:15:59 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:16:53 localhost python3[19231]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Oct 5 03:16:55 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:04 localhost python3[19371]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Oct 5 03:17:07 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:12 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:12 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:35 localhost python3[19705]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Oct 5 03:17:37 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:37 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:43 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:06 localhost python3[19982]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Oct 5 03:18:09 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:09 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:14 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:14 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:37 localhost python3[20320]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-000000000013-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:18:44 localhost python3[20339]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:18:55 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 5 03:19:03 localhost kernel: SELinux: Converting 498 SID table entries...
Oct 5 03:19:03 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 03:19:03 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 03:19:03 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 03:19:03 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 03:19:03 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 03:19:03 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 03:19:03 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 03:19:07 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=4 res=1
Oct 5 03:19:07 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 03:19:07 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 03:19:07 localhost systemd[1]: Reloading.
Oct 5 03:19:07 localhost systemd-rc-local-generator[20982]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:19:07 localhost systemd-sysv-generator[20986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:19:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:19:07 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 03:19:08 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 03:19:08 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 03:19:08 localhost systemd[1]: run-ra5def8332e2545b5a70462f95a40190b.service: Deactivated successfully.
Oct 5 03:19:08 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:19:08 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:19:22 localhost sshd[21764]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:19:34 localhost python3[21781]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-000000000015-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:19:59 localhost python3[21801]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:20:00 localhost python3[21849]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:20:01 localhost python3[21892]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759648800.1474617-291-68343859091460/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=3358dfc6c6ce646155135d0cad900026cb34ba08 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:20:03 localhost python3[21922]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:03 localhost systemd-journald[619]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 91.6 (305 of 333 items), suggesting rotation.
Oct 5 03:20:03 localhost systemd-journald[619]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 5 03:20:03 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 03:20:03 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 03:20:03 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 03:20:03 localhost python3[21943]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:03 localhost python3[21963]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:03 localhost python3[21983]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:04 localhost python3[22003]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:07 localhost python3[22023]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 03:20:07 localhost systemd[1]: Starting LSB: Bring up/down networking...
Oct 5 03:20:07 localhost network[22026]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:07 localhost network[22037]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:07 localhost network[22026]: WARN : [network] 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:07 localhost network[22038]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:07 localhost network[22026]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management.
Oct 5 03:20:07 localhost network[22039]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 5 03:20:07 localhost NetworkManager[5981]: [1759648807.7152] audit: op="connections-reload" pid=22067 uid=0 result="success"
Oct 5 03:20:07 localhost network[22026]: Bringing up loopback interface: [ OK ]
Oct 5 03:20:07 localhost NetworkManager[5981]: [1759648807.9088] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22155 uid=0 result="success"
Oct 5 03:20:07 localhost network[22026]: Bringing up interface eth0: [ OK ]
Oct 5 03:20:07 localhost systemd[1]: Started LSB: Bring up/down networking.
Oct 5 03:20:08 localhost python3[22196]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 03:20:08 localhost systemd[1]: Starting Open vSwitch Database Unit...
Oct 5 03:20:08 localhost chown[22200]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 5 03:20:08 localhost ovs-ctl[22205]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 5 03:20:08 localhost ovs-ctl[22205]: Creating empty database /etc/openvswitch/conf.db [ OK ]
Oct 5 03:20:08 localhost ovs-ctl[22205]: Starting ovsdb-server [ OK ]
Oct 5 03:20:08 localhost ovs-vsctl[22254]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 5 03:20:08 localhost ovs-vsctl[22274]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-110.el9fdp "external-ids:system-id=\"3b30d637-702a-429f-9027-888244ff6474\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Oct 5 03:20:08 localhost ovs-ctl[22205]: Configuring Open vSwitch system IDs [ OK ]
Oct 5 03:20:08 localhost ovs-vsctl[22280]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005471150.novalocal
Oct 5 03:20:08 localhost ovs-ctl[22205]: Enabling remote OVSDB managers [ OK ]
Oct 5 03:20:08 localhost systemd[1]: Started Open vSwitch Database Unit.
Oct 5 03:20:08 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 5 03:20:08 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 5 03:20:08 localhost systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 5 03:20:08 localhost kernel: openvswitch: Open vSwitch switching datapath
Oct 5 03:20:08 localhost ovs-ctl[22324]: Inserting openvswitch module [ OK ]
Oct 5 03:20:09 localhost ovs-ctl[22293]: Starting ovs-vswitchd [ OK ]
Oct 5 03:20:09 localhost ovs-vsctl[22343]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005471150.novalocal
Oct 5 03:20:09 localhost ovs-ctl[22293]: Enabling remote OVSDB managers [ OK ]
Oct 5 03:20:09 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 5 03:20:09 localhost systemd[1]: Starting Open vSwitch...
Oct 5 03:20:09 localhost systemd[1]: Finished Open vSwitch.
Oct 5 03:20:11 localhost python3[22361]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-00000000001a-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:20:12 localhost NetworkManager[5981]: [1759648812.6562] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22519 uid=0 result="success"
Oct 5 03:20:12 localhost ifup[22520]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:12 localhost ifup[22521]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:12 localhost ifup[22522]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:12 localhost NetworkManager[5981]: [1759648812.6896] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22528 uid=0 result="success"
Oct 5 03:20:12 localhost ovs-vsctl[22530]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:1f:49:af -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Oct 5 03:20:12 localhost NetworkManager[5981]: [1759648812.7193] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Oct 5 03:20:12 localhost kernel: device ovs-system entered promiscuous mode
Oct 5 03:20:12 localhost kernel: Timeout policy base is empty
Oct 5 03:20:12 localhost kernel: Failed to associated timeout policy `ovs_test_tp'
Oct 5 03:20:12 localhost systemd-udevd[22532]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:12 localhost kernel: device br-ex entered promiscuous mode
Oct 5 03:20:12 localhost NetworkManager[5981]: [1759648812.7707] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Oct 5 03:20:12 localhost NetworkManager[5981]: [1759648812.8007] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22558 uid=0 result="success"
Oct 5 03:20:12 localhost NetworkManager[5981]: [1759648812.8232] device (br-ex): carrier: link connected
Oct 5 03:20:15 localhost NetworkManager[5981]: [1759648815.8824] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22587 uid=0 result="success"
Oct 5 03:20:15 localhost NetworkManager[5981]: [1759648815.9313] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22602 uid=0 result="success"
Oct 5 03:20:15 localhost NET[22627]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Oct 5 03:20:16 localhost NetworkManager[5981]: [1759648816.0219] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Oct 5 03:20:16 localhost NetworkManager[5981]: [1759648816.0305] dhcp4 (eth1): canceled DHCP transaction
Oct 5 03:20:16 localhost NetworkManager[5981]: [1759648816.0305] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 5 03:20:16 localhost NetworkManager[5981]: [1759648816.0305] dhcp4 (eth1): state changed no lease
Oct 5 03:20:16 localhost NetworkManager[5981]: [1759648816.0344] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22636 uid=0 result="success"
Oct 5 03:20:16 localhost ifup[22637]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:16 localhost ifup[22638]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:16 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 5 03:20:16 localhost ifup[22640]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:16 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 5 03:20:16 localhost NetworkManager[5981]: [1759648816.0695] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22651 uid=0 result="success"
Oct 5 03:20:16 localhost NetworkManager[5981]: [1759648816.1198] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22664 uid=0 result="success"
Oct 5 03:20:16 localhost NetworkManager[5981]: [1759648816.1281] device (eth1): carrier: link connected
Oct 5 03:20:16 localhost NetworkManager[5981]: [1759648816.1538] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22673 uid=0 result="success"
Oct 5 03:20:16 localhost ipv6_wait_tentative[22685]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Oct 5 03:20:17 localhost ipv6_wait_tentative[22690]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Oct 5 03:20:17 localhost sshd[22692]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:20:18 localhost NetworkManager[5981]: [1759648818.2295] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22701 uid=0 result="success"
Oct 5 03:20:18 localhost ovs-vsctl[22716]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Oct 5 03:20:18 localhost kernel: device eth1 entered promiscuous mode
Oct 5 03:20:18 localhost NetworkManager[5981]: [1759648818.3066] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22724 uid=0 result="success"
Oct 5 03:20:18 localhost ifup[22725]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:18 localhost ifup[22726]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:18 localhost ifup[22727]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:18 localhost NetworkManager[5981]: [1759648818.3385] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22733 uid=0 result="success"
Oct 5 03:20:18 localhost NetworkManager[5981]: [1759648818.3817] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22743 uid=0 result="success"
Oct 5 03:20:18 localhost ifup[22744]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:18 localhost ifup[22745]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:18 localhost ifup[22746]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:18 localhost NetworkManager[5981]: [1759648818.4127] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22752 uid=0 result="success"
Oct 5 03:20:18 localhost ovs-vsctl[22755]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Oct 5 03:20:18 localhost kernel: device vlan22 entered promiscuous mode
Oct 5 03:20:18 localhost NetworkManager[5981]: [1759648818.4530] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Oct 5 03:20:18 localhost systemd-udevd[22757]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:18 localhost NetworkManager[5981]: [1759648818.4795] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22766 uid=0 result="success"
Oct 5 03:20:18 localhost NetworkManager[5981]: [1759648818.5025] device (vlan22): carrier: link connected
Oct 5 03:20:21 localhost NetworkManager[5981]: [1759648821.5736] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22796 uid=0 result="success"
Oct 5 03:20:21 localhost NetworkManager[5981]: [1759648821.6276] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22811 uid=0 result="success"
Oct 5 03:20:21 localhost NetworkManager[5981]: [1759648821.6935] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22832 uid=0 result="success"
Oct 5 03:20:21 localhost ifup[22833]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:21 localhost ifup[22834]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:21 localhost ifup[22835]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:21 localhost NetworkManager[5981]: [1759648821.7288] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22841 uid=0 result="success"
Oct 5 03:20:21 localhost ovs-vsctl[22844]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Oct 5 03:20:21 localhost systemd-udevd[22846]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:21 localhost kernel: device vlan21 entered promiscuous mode
Oct 5 03:20:21 localhost NetworkManager[5981]: [1759648821.7964] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Oct 5 03:20:21 localhost NetworkManager[5981]: [1759648821.8238] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22856 uid=0 result="success"
Oct 5 03:20:21 localhost NetworkManager[5981]: [1759648821.8463] device (vlan21): carrier: link connected
Oct 5 03:20:24 localhost NetworkManager[5981]: [1759648824.8995] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22886 uid=0 result="success"
Oct 5 03:20:24 localhost NetworkManager[5981]: [1759648824.9435] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22901 uid=0 result="success"
Oct 5 03:20:25 localhost NetworkManager[5981]: [1759648825.0022] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22922 uid=0 result="success"
Oct 5 03:20:25 localhost ifup[22923]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:25 localhost ifup[22924]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:25 localhost ifup[22925]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:25 localhost NetworkManager[5981]: [1759648825.0342] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22931 uid=0 result="success"
Oct 5 03:20:25 localhost ovs-vsctl[22934]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Oct 5 03:20:25 localhost kernel: device vlan44 entered promiscuous mode
Oct 5 03:20:25 localhost systemd-udevd[22936]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:25 localhost NetworkManager[5981]: [1759648825.0690] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Oct 5 03:20:25 localhost NetworkManager[5981]: [1759648825.0954] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22946 uid=0 result="success"
Oct 5 03:20:25 localhost NetworkManager[5981]: [1759648825.1142] device (vlan44): carrier: link connected
Oct 5 03:20:26 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 5 03:20:28 localhost NetworkManager[5981]: [1759648828.1762] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22976 uid=0 result="success"
Oct 5 03:20:28 localhost NetworkManager[5981]: [1759648828.2238] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22991 uid=0 result="success"
Oct 5 03:20:28 localhost NetworkManager[5981]: [1759648828.2868] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23012 uid=0 result="success"
Oct 5 03:20:28 localhost ifup[23013]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:28 localhost ifup[23014]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:28 localhost ifup[23015]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:28 localhost NetworkManager[5981]: [1759648828.3200] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23021 uid=0 result="success"
Oct 5 03:20:28 localhost ovs-vsctl[23024]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Oct 5 03:20:28 localhost NetworkManager[5981]: [1759648828.3593] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Oct 5 03:20:28 localhost kernel: device vlan20 entered promiscuous mode
Oct 5 03:20:28 localhost systemd-udevd[23026]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:28 localhost NetworkManager[5981]: [1759648828.3857] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23036 uid=0 result="success"
Oct 5 03:20:28 localhost NetworkManager[5981]: [1759648828.4069] device (vlan20): carrier: link connected
Oct 5 03:20:31 localhost NetworkManager[5981]: [1759648831.4590] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23066 uid=0 result="success"
Oct 5 03:20:31 localhost NetworkManager[5981]: [1759648831.5069] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23081 uid=0 result="success"
Oct 5 03:20:31 localhost NetworkManager[5981]: [1759648831.5589] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23102 uid=0 result="success"
Oct 5 03:20:31 localhost ifup[23103]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:31 localhost ifup[23104]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:31 localhost ifup[23105]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:31 localhost NetworkManager[5981]: [1759648831.5841] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23111 uid=0 result="success"
Oct 5 03:20:31 localhost ovs-vsctl[23114]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Oct 5 03:20:31 localhost kernel: device vlan23 entered promiscuous mode
Oct 5 03:20:31 localhost systemd-udevd[23116]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:31 localhost NetworkManager[5981]: [1759648831.6211] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Oct 5 03:20:31 localhost NetworkManager[5981]: [1759648831.6419] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23126 uid=0 result="success"
Oct 5 03:20:31 localhost NetworkManager[5981]: [1759648831.6596] device (vlan23): carrier: link connected
Oct 5 03:20:34 localhost NetworkManager[5981]: [1759648834.7125] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23156 uid=0 result="success"
Oct 5 03:20:34 localhost NetworkManager[5981]: [1759648834.7572] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23171 uid=0 result="success"
Oct 5 03:20:34 localhost NetworkManager[5981]: [1759648834.8147] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23192 uid=0 result="success"
Oct 5 03:20:34 localhost ifup[23193]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:34 localhost ifup[23194]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:34 localhost ifup[23195]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:34 localhost NetworkManager[5981]: [1759648834.8462] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23201 uid=0 result="success"
Oct 5 03:20:34 localhost ovs-vsctl[23204]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Oct 5 03:20:34 localhost NetworkManager[5981]: [1759648834.9010] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23211 uid=0 result="success"
Oct 5 03:20:35 localhost NetworkManager[5981]: [1759648835.9566] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23238 uid=0 result="success"
Oct 5 03:20:35 localhost NetworkManager[5981]: [1759648835.9985] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23253 uid=0 result="success"
Oct 5 03:20:36 localhost NetworkManager[5981]: [1759648836.0501] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23274 uid=0 result="success"
Oct 5 03:20:36 localhost ifup[23275]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:36 localhost ifup[23276]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:36 localhost ifup[23277]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:36 localhost NetworkManager[5981]: [1759648836.0803] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23283 uid=0 result="success"
Oct 5 03:20:36 localhost ovs-vsctl[23286]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Oct 5 03:20:36 localhost NetworkManager[5981]: [1759648836.1429] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23293 uid=0 result="success"
Oct 5 03:20:37 localhost NetworkManager[5981]: [1759648837.2071] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23321 uid=0 result="success"
Oct 5 03:20:37 localhost NetworkManager[5981]: [1759648837.2572] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23336 uid=0 result="success"
Oct 5 03:20:37 localhost NetworkManager[5981]: [1759648837.3196] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23357 uid=0 result="success"
Oct 5 03:20:37 localhost ifup[23358]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:37 localhost ifup[23359]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:37 localhost ifup[23360]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:37 localhost NetworkManager[5981]: [1759648837.3520] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23366 uid=0 result="success"
Oct 5 03:20:37 localhost ovs-vsctl[23369]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Oct 5 03:20:37 localhost NetworkManager[5981]: [1759648837.4096] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23376 uid=0 result="success"
Oct 5 03:20:38 localhost NetworkManager[5981]: [1759648838.4692] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23404 uid=0 result="success"
Oct 5 03:20:38 localhost NetworkManager[5981]: [1759648838.5181] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23419 uid=0 result="success"
Oct 5 03:20:38 localhost NetworkManager[5981]: [1759648838.5829] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23440 uid=0 result="success"
Oct 5 03:20:38 localhost ifup[23441]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:38 localhost ifup[23442]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:38 localhost ifup[23443]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:38 localhost NetworkManager[5981]: [1759648838.6178] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23449 uid=0 result="success"
Oct 5 03:20:38 localhost ovs-vsctl[23452]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Oct 5 03:20:38 localhost NetworkManager[5981]: [1759648838.6827] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23459 uid=0 result="success"
Oct 5 03:20:39 localhost NetworkManager[5981]: [1759648839.7474] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23487 uid=0 result="success"
Oct 5 03:20:39 localhost NetworkManager[5981]: [1759648839.7919] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23502 uid=0 result="success"
Oct 5 03:20:39 localhost NetworkManager[5981]: [1759648839.8479] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23523 uid=0 result="success"
Oct 5 03:20:39 localhost ifup[23524]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:39 localhost ifup[23525]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:39 localhost ifup[23526]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:39 localhost NetworkManager[5981]: [1759648839.8777] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23532 uid=0 result="success" Oct 5 03:20:39 localhost ovs-vsctl[23535]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Oct 5 03:20:39 localhost NetworkManager[5981]: [1759648839.9357] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23542 uid=0 result="success" Oct 5 03:20:41 localhost NetworkManager[5981]: [1759648841.0006] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23570 uid=0 result="success" Oct 5 03:20:41 localhost NetworkManager[5981]: [1759648841.0495] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23585 uid=0 result="success" Oct 5 03:21:16 localhost sshd[23603]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:21:34 localhost python3[23619]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-00000000001b-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:21:40 localhost python3[23638]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Oct 5 03:21:40 localhost python3[23654]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Oct 5 03:21:42 localhost python3[23668]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Oct 5 03:21:43 localhost python3[23684]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Oct 5 03:21:44 localhost python3[23698]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname Oct 5 03:21:45 localhost python3[23713]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005471150.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-000000000022-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:21:45 localhost python3[23733]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:21:45 localhost systemd[1]: Starting Hostname Service... Oct 5 03:21:45 localhost systemd[1]: Started Hostname Service. Oct 5 03:21:45 localhost systemd-hostnamed[23737]: Hostname set to (static) Oct 5 03:21:45 localhost NetworkManager[5981]: [1759648905.9164] hostname: static hostname changed from "np0005471150.novalocal" to "np0005471150.localdomain" Oct 5 03:21:45 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Oct 5 03:21:45 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Oct 5 03:21:47 localhost systemd[1]: session-11.scope: Deactivated successfully. Oct 5 03:21:47 localhost systemd[1]: session-11.scope: Consumed 1min 43.214s CPU time. Oct 5 03:21:47 localhost systemd-logind[760]: Session 11 logged out. Waiting for processes to exit. Oct 5 03:21:47 localhost systemd-logind[760]: Removed session 11. Oct 5 03:21:50 localhost sshd[23748]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:21:50 localhost systemd-logind[760]: New session 12 of user zuul. Oct 5 03:21:50 localhost systemd[1]: Started Session 12 of User zuul. Oct 5 03:21:50 localhost python3[23765]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Oct 5 03:21:52 localhost systemd-logind[760]: Session 12 logged out. Waiting for processes to exit. Oct 5 03:21:52 localhost systemd[1]: session-12.scope: Deactivated successfully. 
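In these journal entries, Ansible renders embedded newlines in `_raw_params` as `#012` (octal 012, i.e. `\n`). Decoded, the command logged at 03:21:45 shortens the FQDN to its first label before writing it to `/home/zuul/ansible_hostname`. A minimal sketch of that shell logic (echoing the result instead of writing the file, so the sketch stays side-effect free):

```shell
# Decoded from the logged _raw_params (#012 = newline):
hostname="np0005471150.novalocal"
# Replace every dot with a space, then let word splitting build an array
hostname_str_array=(${hostname//./ })
# First element is the short host name
short="${hostname_str_array[0]}"
echo "$short"
```

The follow-up task then reads that file back and runs `hostnamectl hostname "$hostname.localdomain"`, which is what triggers the NetworkManager hostname-change message a moment later.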
Oct 5 03:21:52 localhost systemd-logind[760]: Removed session 12. Oct 5 03:21:55 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Oct 5 03:22:15 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 5 03:22:31 localhost sshd[23769]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:22:32 localhost systemd-logind[760]: New session 13 of user zuul. Oct 5 03:22:32 localhost systemd[1]: Started Session 13 of User zuul. Oct 5 03:22:32 localhost python3[23788]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:22:35 localhost systemd[1]: Reloading. Oct 5 03:22:36 localhost systemd-rc-local-generator[23833]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:22:36 localhost systemd-sysv-generator[23837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:22:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:22:36 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs. Oct 5 03:22:36 localhost systemd[1]: Reloading. 
Oct 5 03:22:36 localhost systemd-sysv-generator[23878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:22:36 localhost systemd-rc-local-generator[23875]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:22:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:22:36 localhost systemd[1]: Starting dnf makecache... Oct 5 03:22:36 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Oct 5 03:22:36 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. Oct 5 03:22:36 localhost systemd[1]: Reloading. Oct 5 03:22:36 localhost systemd-rc-local-generator[23906]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:22:36 localhost systemd-sysv-generator[23912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:22:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:22:36 localhost dnf[23885]: Updating Subscription Management repositories. Oct 5 03:22:36 localhost systemd[1]: Listening on LVM2 poll daemon socket. Oct 5 03:22:37 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 03:22:37 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 03:22:37 localhost systemd[1]: Reloading. 
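The repeated warning about `insights-client-boot.service:24` asks for `MemoryLimit=` to be replaced with `MemoryMax=`. Rather than editing the packaged unit, a systemd drop-in is the usual remedy; the snippet below is a hypothetical example (the `1G` value is illustrative, not taken from the log):

```
# /etc/systemd/system/insights-client-boot.service.d/memorymax.conf
[Service]
# Clear the deprecated setting, then set its replacement
MemoryLimit=
MemoryMax=1G
```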
Oct 5 03:22:37 localhost systemd-rc-local-generator[23987]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:22:37 localhost systemd-sysv-generator[23995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:22:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:22:37 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 03:22:37 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 03:22:37 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 03:22:37 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 03:22:37 localhost systemd[1]: run-r5607b5d52d504b56bad9d39604eb7051.service: Deactivated successfully. Oct 5 03:22:37 localhost systemd[1]: run-r5d4a37010dc34166807fadd7289c0828.service: Deactivated successfully. Oct 5 03:22:38 localhost dnf[23885]: Failed determining last makecache time. 
Oct 5 03:22:38 localhost dnf[23885]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 38 kB/s | 4.5 kB 00:00 Oct 5 03:22:39 localhost dnf[23885]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 31 kB/s | 4.1 kB 00:00 Oct 5 03:22:39 localhost dnf[23885]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_ 29 kB/s | 4.0 kB 00:00 Oct 5 03:22:39 localhost dnf[23885]: Fast Datapath for RHEL 9 x86_64 (RPMs) 30 kB/s | 4.0 kB 00:00 Oct 5 03:22:39 localhost dnf[23885]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 34 kB/s | 4.5 kB 00:00 Oct 5 03:22:39 localhost dnf[23885]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 31 kB/s | 4.1 kB 00:00 Oct 5 03:22:40 localhost dnf[23885]: Red Hat Enterprise Linux 9 for x86_64 - High Av 31 kB/s | 4.0 kB 00:00 Oct 5 03:22:40 localhost dnf[23885]: Metadata cache created. Oct 5 03:22:40 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Oct 5 03:22:40 localhost systemd[1]: Finished dnf makecache. Oct 5 03:22:40 localhost systemd[1]: dnf-makecache.service: Consumed 3.004s CPU time. Oct 5 03:23:38 localhost systemd[1]: session-13.scope: Deactivated successfully. Oct 5 03:23:38 localhost systemd[1]: session-13.scope: Consumed 4.586s CPU time. Oct 5 03:23:38 localhost systemd-logind[760]: Session 13 logged out. Waiting for processes to exit. Oct 5 03:23:38 localhost systemd-logind[760]: Removed session 13. Oct 5 03:26:35 localhost sshd[24571]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:32:54 localhost sshd[24575]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:39:17 localhost sshd[24581]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:40:05 localhost sshd[24583]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:40:05 localhost systemd-logind[760]: New session 14 of user zuul. Oct 5 03:40:05 localhost systemd[1]: Started Session 14 of User zuul. 
Oct 5 03:40:06 localhost python3[24631]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 03:40:08 localhost python3[24718]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:40:11 localhost python3[24735]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:40:11 localhost python3[24751]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:40:12 localhost kernel: loop: module loaded Oct 5 03:40:12 localhost kernel: loop3: detected capacity change from 0 to 14680064 Oct 5 03:40:12 localhost python3[24776]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:40:12 localhost lvm[24779]: PV /dev/loop3 not used. Oct 5 03:40:12 localhost lvm[24781]: PV /dev/loop3 online, VG ceph_vg0 is complete. 
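The `dd ... seek=7G` in the task above creates a 7 GiB sparse backing file, and the kernel then reports `loop3: detected capacity change from 0 to 14680064` in 512-byte sectors. A quick consistency check of those two numbers:

```shell
# 7 GiB expressed in 512-byte sectors, matching the kernel's
# "detected capacity change from 0 to 14680064" for /dev/loop3
sectors=$((7 * 1024 * 1024 * 1024 / 512))
echo "$sectors"
```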
Oct 5 03:40:12 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0. Oct 5 03:40:12 localhost lvm[24789]: 1 logical volume(s) in volume group "ceph_vg0" now active Oct 5 03:40:12 localhost lvm[24791]: PV /dev/loop3 online, VG ceph_vg0 is complete. Oct 5 03:40:12 localhost lvm[24791]: VG ceph_vg0 finished Oct 5 03:40:12 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully. Oct 5 03:40:13 localhost python3[24840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:40:13 localhost python3[24883]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759650012.998421-55305-266865856161878/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:40:14 localhost python3[24913]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:40:14 localhost systemd[1]: Reloading. Oct 5 03:40:14 localhost systemd-rc-local-generator[24937]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:40:14 localhost systemd-sysv-generator[24941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
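Decoded from the 03:40:12 entry, the LVM provisioning on the loop device is a straight `pvcreate`/`vgcreate`/`lvcreate` sequence. Those commands require root and a real `/dev/loop3`, so this sketch wraps them in a dry-run guard (`DRY_RUN` and `run` are illustrative helpers, not part of the logged task) that only prints what would execute:

```shell
# Decoded LVM provisioning from the journal. DRY_RUN=1 makes the
# sketch print each command instead of executing it.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }
run pvcreate /dev/loop3
run vgcreate ceph_vg0 /dev/loop3
run lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
run lvs
```

With `DRY_RUN=0` on a root shell this reproduces the state the log shows next: udev/LVM autoactivation notices the complete VG and `lvm-activate-ceph_vg0.service` brings the logical volume online.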
Oct 5 03:40:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:40:15 localhost systemd[1]: Starting Ceph OSD losetup... Oct 5 03:40:15 localhost bash[24955]: /dev/loop3: [64516]:8399529 (/var/lib/ceph-osd-0.img) Oct 5 03:40:15 localhost systemd[1]: Finished Ceph OSD losetup. Oct 5 03:40:15 localhost lvm[24956]: PV /dev/loop3 online, VG ceph_vg0 is complete. Oct 5 03:40:15 localhost lvm[24956]: VG ceph_vg0 finished Oct 5 03:40:15 localhost python3[24972]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:40:18 localhost python3[24989]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:40:19 localhost python3[25005]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:40:19 localhost kernel: loop4: detected capacity change from 0 to 14680064 Oct 5 03:40:19 localhost python3[25027]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l 
+100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:40:19 localhost lvm[25030]: PV /dev/loop4 not used. Oct 5 03:40:19 localhost lvm[25040]: PV /dev/loop4 online, VG ceph_vg1 is complete. Oct 5 03:40:19 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1. Oct 5 03:40:19 localhost lvm[25042]: 1 logical volume(s) in volume group "ceph_vg1" now active Oct 5 03:40:19 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully. Oct 5 03:40:20 localhost python3[25090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:40:20 localhost python3[25133]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759650020.2025607-55525-32161552401337/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:40:21 localhost python3[25163]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:40:21 localhost systemd[1]: Reloading. Oct 5 03:40:21 localhost systemd-rc-local-generator[25187]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:40:21 localhost systemd-sysv-generator[25191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:40:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:40:21 localhost systemd[1]: Starting Ceph OSD losetup... Oct 5 03:40:21 localhost bash[25204]: /dev/loop4: [64516]:8400144 (/var/lib/ceph-osd-1.img) Oct 5 03:40:21 localhost systemd[1]: Finished Ceph OSD losetup. Oct 5 03:40:21 localhost lvm[25205]: PV /dev/loop4 online, VG ceph_vg1 is complete. Oct 5 03:40:21 localhost lvm[25205]: VG ceph_vg1 finished Oct 5 03:40:31 localhost python3[25251]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Oct 5 03:40:32 localhost python3[25271]: ansible-hostname Invoked with name=np0005471150.localdomain use=None Oct 5 03:40:32 localhost systemd[1]: Starting Hostname Service... Oct 5 03:40:32 localhost systemd[1]: Started Hostname Service. Oct 5 03:40:36 localhost python3[25294]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Oct 5 03:40:36 localhost python3[25342]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.j78egc11tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:40:37 localhost python3[25372]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.j78egc11tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! 
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:40:37 localhost python3[25388]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.j78egc11tmphosts insertbefore=BOF block=192.168.122.106 np0005471150.localdomain np0005471150#012192.168.122.106 np0005471150.ctlplane.localdomain np0005471150.ctlplane#012192.168.122.107 np0005471151.localdomain np0005471151#012192.168.122.107 np0005471151.ctlplane.localdomain np0005471151.ctlplane#012192.168.122.108 np0005471152.localdomain np0005471152#012192.168.122.108 np0005471152.ctlplane.localdomain np0005471152.ctlplane#012192.168.122.103 np0005471146.localdomain np0005471146#012192.168.122.103 np0005471146.ctlplane.localdomain np0005471146.ctlplane#012192.168.122.104 np0005471147.localdomain np0005471147#012192.168.122.104 np0005471147.ctlplane.localdomain np0005471147.ctlplane#012192.168.122.105 np0005471148.localdomain np0005471148#012192.168.122.105 np0005471148.ctlplane.localdomain np0005471148.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:40:38 localhost python3[25404]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.j78egc11tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:40:38 localhost python3[25421]: ansible-file Invoked with path=/tmp/ansible.j78egc11tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:40:41 localhost python3[25437]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:40:42 localhost python3[25455]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:40:47 localhost python3[25504]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:40:47 localhost python3[25549]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759650046.6153848-56331-234054400799797/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:40:48 localhost python3[25579]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started 
daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:40:50 localhost python3[25597]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 03:40:50 localhost systemd[1]: Stopping NTP client/server... Oct 5 03:40:50 localhost chronyd[766]: chronyd exiting Oct 5 03:40:50 localhost systemd[1]: chronyd.service: Deactivated successfully. Oct 5 03:40:50 localhost systemd[1]: Stopped NTP client/server. Oct 5 03:40:50 localhost systemd[1]: chronyd.service: Consumed 97ms CPU time, read 1.9M from disk, written 0B to disk. Oct 5 03:40:50 localhost systemd[1]: Starting NTP client/server... Oct 5 03:40:50 localhost chronyd[25605]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Oct 5 03:40:50 localhost chronyd[25605]: Frequency -26.443 +/- 0.170 ppm read from /var/lib/chrony/drift Oct 5 03:40:50 localhost chronyd[25605]: Loaded seccomp filter (level 2) Oct 5 03:40:50 localhost systemd[1]: Started NTP client/server. 
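For reference, the `blockinfile` task logged at 03:40:37 (with its `#012` escapes decoded, and the `# {mark}` marker template expanded) writes the following block into the temporary hosts file, which the subsequent `cp` then installs as `/etc/hosts`:

```
# START_HOST_ENTRIES_FOR_STACK: overcloud
192.168.122.106 np0005471150.localdomain np0005471150
192.168.122.106 np0005471150.ctlplane.localdomain np0005471150.ctlplane
192.168.122.107 np0005471151.localdomain np0005471151
192.168.122.107 np0005471151.ctlplane.localdomain np0005471151.ctlplane
192.168.122.108 np0005471152.localdomain np0005471152
192.168.122.108 np0005471152.ctlplane.localdomain np0005471152.ctlplane
192.168.122.103 np0005471146.localdomain np0005471146
192.168.122.103 np0005471146.ctlplane.localdomain np0005471146.ctlplane
192.168.122.104 np0005471147.localdomain np0005471147
192.168.122.104 np0005471147.ctlplane.localdomain np0005471147.ctlplane
192.168.122.105 np0005471148.localdomain np0005471148
192.168.122.105 np0005471148.ctlplane.localdomain np0005471148.ctlplane

192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
# END_HOST_ENTRIES_FOR_STACK: overcloud
```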
Oct 5 03:40:52 localhost python3[25654]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:40:52 localhost python3[25697]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759650052.0447168-56729-102601980243033/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:40:53 localhost python3[25727]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:40:53 localhost systemd[1]: Reloading. Oct 5 03:40:53 localhost systemd-rc-local-generator[25747]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:40:53 localhost systemd-sysv-generator[25752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:40:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:40:53 localhost systemd[1]: Reloading. Oct 5 03:40:53 localhost systemd-sysv-generator[25796]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:40:53 localhost systemd-rc-local-generator[25791]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:40:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:40:53 localhost systemd[1]: Starting chronyd online sources service... Oct 5 03:40:53 localhost chronyc[25803]: 200 OK Oct 5 03:40:53 localhost systemd[1]: chrony-online.service: Deactivated successfully. Oct 5 03:40:53 localhost systemd[1]: Finished chronyd online sources service. Oct 5 03:40:54 localhost python3[25819]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:40:54 localhost chronyd[25605]: System clock was stepped by 0.000000 seconds Oct 5 03:40:54 localhost chronyd[25605]: Selected source 216.128.178.20 (pool.ntp.org) Oct 5 03:40:54 localhost python3[25836]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:41:02 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 5 03:41:05 localhost python3[25857]: ansible-timezone Invoked with name=UTC hwclock=None Oct 5 03:41:05 localhost systemd[1]: Starting Time & Date Service... Oct 5 03:41:05 localhost systemd[1]: Started Time & Date Service. 
Oct 5 03:41:06 localhost python3[25877]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 03:41:07 localhost chronyd[25605]: chronyd exiting Oct 5 03:41:07 localhost systemd[1]: Stopping NTP client/server... Oct 5 03:41:07 localhost systemd[1]: chronyd.service: Deactivated successfully. Oct 5 03:41:07 localhost systemd[1]: Stopped NTP client/server. Oct 5 03:41:07 localhost systemd[1]: Starting NTP client/server... Oct 5 03:41:07 localhost chronyd[25884]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Oct 5 03:41:07 localhost chronyd[25884]: Frequency -26.443 +/- 0.177 ppm read from /var/lib/chrony/drift Oct 5 03:41:07 localhost chronyd[25884]: Loaded seccomp filter (level 2) Oct 5 03:41:07 localhost systemd[1]: Started NTP client/server. Oct 5 03:41:13 localhost chronyd[25884]: Selected source 138.197.135.239 (pool.ntp.org) Oct 5 03:41:35 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Oct 5 03:43:05 localhost sshd[26081]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:05 localhost systemd-logind[760]: New session 15 of user ceph-admin. Oct 5 03:43:05 localhost systemd[1]: Created slice User Slice of UID 1002. Oct 5 03:43:05 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Oct 5 03:43:05 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Oct 5 03:43:05 localhost systemd[1]: Starting User Manager for UID 1002... Oct 5 03:43:06 localhost sshd[26098]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:06 localhost systemd[26085]: Queued start job for default target Main User Target. Oct 5 03:43:06 localhost systemd[26085]: Created slice User Application Slice. Oct 5 03:43:06 localhost systemd[26085]: Started Mark boot as successful after the user session has run 2 minutes. 
Oct 5 03:43:06 localhost systemd[26085]: Started Daily Cleanup of User's Temporary Directories. Oct 5 03:43:06 localhost systemd[26085]: Reached target Paths. Oct 5 03:43:06 localhost systemd[26085]: Reached target Timers. Oct 5 03:43:06 localhost systemd[26085]: Starting D-Bus User Message Bus Socket... Oct 5 03:43:06 localhost systemd[26085]: Starting Create User's Volatile Files and Directories... Oct 5 03:43:06 localhost systemd[26085]: Listening on D-Bus User Message Bus Socket. Oct 5 03:43:06 localhost systemd[26085]: Reached target Sockets. Oct 5 03:43:06 localhost systemd[26085]: Finished Create User's Volatile Files and Directories. Oct 5 03:43:06 localhost systemd[26085]: Reached target Basic System. Oct 5 03:43:06 localhost systemd[26085]: Reached target Main User Target. Oct 5 03:43:06 localhost systemd[26085]: Startup finished in 117ms. Oct 5 03:43:06 localhost systemd[1]: Started User Manager for UID 1002. Oct 5 03:43:06 localhost systemd[1]: Started Session 15 of User ceph-admin. Oct 5 03:43:06 localhost systemd-logind[760]: New session 17 of user ceph-admin. Oct 5 03:43:06 localhost systemd[1]: Started Session 17 of User ceph-admin. Oct 5 03:43:06 localhost sshd[26120]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:06 localhost systemd-logind[760]: New session 18 of user ceph-admin. Oct 5 03:43:06 localhost systemd[1]: Started Session 18 of User ceph-admin. Oct 5 03:43:06 localhost sshd[26139]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:06 localhost systemd-logind[760]: New session 19 of user ceph-admin. Oct 5 03:43:06 localhost systemd[1]: Started Session 19 of User ceph-admin. Oct 5 03:43:07 localhost sshd[26158]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:07 localhost systemd-logind[760]: New session 20 of user ceph-admin. Oct 5 03:43:07 localhost systemd[1]: Started Session 20 of User ceph-admin. 
Oct 5 03:43:07 localhost sshd[26177]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:07 localhost systemd-logind[760]: New session 21 of user ceph-admin. Oct 5 03:43:07 localhost systemd[1]: Started Session 21 of User ceph-admin. Oct 5 03:43:08 localhost sshd[26196]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:08 localhost systemd-logind[760]: New session 22 of user ceph-admin. Oct 5 03:43:08 localhost systemd[1]: Started Session 22 of User ceph-admin. Oct 5 03:43:08 localhost sshd[26215]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:08 localhost systemd-logind[760]: New session 23 of user ceph-admin. Oct 5 03:43:08 localhost systemd[1]: Started Session 23 of User ceph-admin. Oct 5 03:43:08 localhost sshd[26234]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:08 localhost systemd-logind[760]: New session 24 of user ceph-admin. Oct 5 03:43:08 localhost systemd[1]: Started Session 24 of User ceph-admin. Oct 5 03:43:09 localhost sshd[26253]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:09 localhost systemd-logind[760]: New session 25 of user ceph-admin. Oct 5 03:43:09 localhost systemd[1]: Started Session 25 of User ceph-admin. Oct 5 03:43:09 localhost sshd[26270]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:09 localhost systemd-logind[760]: New session 26 of user ceph-admin. Oct 5 03:43:09 localhost systemd[1]: Started Session 26 of User ceph-admin. Oct 5 03:43:10 localhost sshd[26289]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:43:10 localhost systemd-logind[760]: New session 27 of user ceph-admin. Oct 5 03:43:10 localhost systemd[1]: Started Session 27 of User ceph-admin. Oct 5 03:43:10 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 5 03:43:39 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 5 03:43:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Oct 5 03:43:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 5 03:43:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 5 03:43:41 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26503 (sysctl) Oct 5 03:43:41 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System... Oct 5 03:43:41 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System. Oct 5 03:43:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 5 03:43:42 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 5 03:43:45 localhost kernel: VFS: idmapped mount is not enabled. Oct 5 03:44:06 localhost podman[26640]: Oct 5 03:44:06 localhost podman[26640]: 2025-10-05 07:44:06.895945882 +0000 UTC m=+24.290780833 container create b5a7bb5eada95c124aa5eba39b6d39de05b34b9e05170b9e4f243bcaa52c9d0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_davinci, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, version=7, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, 
io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container) Oct 5 03:44:06 localhost podman[26640]: 2025-10-05 07:43:42.648520946 +0000 UTC m=+0.043355937 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:06 localhost systemd[1]: Created slice Slice /machine. Oct 5 03:44:06 localhost systemd[1]: Started libpod-conmon-b5a7bb5eada95c124aa5eba39b6d39de05b34b9e05170b9e4f243bcaa52c9d0e.scope. Oct 5 03:44:06 localhost systemd[1]: Started libcrun container. Oct 5 03:44:06 localhost podman[26640]: 2025-10-05 07:44:06.990787694 +0000 UTC m=+24.385622665 container init b5a7bb5eada95c124aa5eba39b6d39de05b34b9e05170b9e4f243bcaa52c9d0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_davinci, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 5 03:44:07 localhost podman[26640]: 2025-10-05 07:44:07.00332604 +0000 UTC m=+24.398161021 container start b5a7bb5eada95c124aa5eba39b6d39de05b34b9e05170b9e4f243bcaa52c9d0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_davinci, release=553, vcs-type=git, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph) Oct 5 03:44:07 localhost podman[26640]: 2025-10-05 07:44:07.003638811 +0000 UTC m=+24.398473792 container attach b5a7bb5eada95c124aa5eba39b6d39de05b34b9e05170b9e4f243bcaa52c9d0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_davinci, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=553, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7) Oct 5 03:44:07 localhost compassionate_davinci[26784]: 167 167 Oct 5 03:44:07 localhost systemd[1]: libpod-b5a7bb5eada95c124aa5eba39b6d39de05b34b9e05170b9e4f243bcaa52c9d0e.scope: Deactivated successfully. Oct 5 03:44:07 localhost podman[26640]: 2025-10-05 07:44:07.009340639 +0000 UTC m=+24.404175630 container died b5a7bb5eada95c124aa5eba39b6d39de05b34b9e05170b9e4f243bcaa52c9d0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_davinci, architecture=x86_64, io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_BRANCH=main, release=553, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 03:44:07 localhost podman[26789]: 2025-10-05 07:44:07.0897339 +0000 UTC m=+0.073344308 container remove b5a7bb5eada95c124aa5eba39b6d39de05b34b9e05170b9e4f243bcaa52c9d0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_davinci, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container) Oct 5 03:44:07 localhost systemd[1]: libpod-conmon-b5a7bb5eada95c124aa5eba39b6d39de05b34b9e05170b9e4f243bcaa52c9d0e.scope: Deactivated successfully. 
Oct 5 03:44:07 localhost podman[26811]: Oct 5 03:44:07 localhost podman[26811]: 2025-10-05 07:44:07.299913367 +0000 UTC m=+0.044338571 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:07 localhost podman[26811]: 2025-10-05 07:44:07.451144327 +0000 UTC m=+0.195569511 container create 3c61ebb4c7488a0a9053eb12ed37319ecce6fc86c2c8e890ab56bdc96f789364 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_agnesi, RELEASE=main, io.buildah.version=1.33.12, version=7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=553, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, architecture=x86_64, build-date=2025-09-24T08:57:55) Oct 5 03:44:07 localhost systemd[1]: Started libpod-conmon-3c61ebb4c7488a0a9053eb12ed37319ecce6fc86c2c8e890ab56bdc96f789364.scope. Oct 5 03:44:07 localhost systemd[1]: Started libcrun container. 
Oct 5 03:44:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de3669c0f7e4567159721944521fb567a955cc3a377d8cc49acd53a0ea6fa12/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de3669c0f7e4567159721944521fb567a955cc3a377d8cc49acd53a0ea6fa12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:07 localhost systemd[1]: tmp-crun.tqWw6u.mount: Deactivated successfully. Oct 5 03:44:07 localhost systemd[1]: var-lib-containers-storage-overlay-f3fc95d5b44b942f5d84b4e8c176036f388734b0f2ed334c604a6107bf5fb81b-merged.mount: Deactivated successfully. Oct 5 03:44:12 localhost podman[26811]: 2025-10-05 07:44:12.305641409 +0000 UTC m=+5.050066583 container init 3c61ebb4c7488a0a9053eb12ed37319ecce6fc86c2c8e890ab56bdc96f789364 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_agnesi, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, version=7) Oct 5 
03:44:12 localhost podman[26811]: 2025-10-05 07:44:12.320810096 +0000 UTC m=+5.065235270 container start 3c61ebb4c7488a0a9053eb12ed37319ecce6fc86c2c8e890ab56bdc96f789364 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_agnesi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=553, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, name=rhceph, GIT_BRANCH=main, architecture=x86_64) Oct 5 03:44:12 localhost podman[26811]: 2025-10-05 07:44:12.321237611 +0000 UTC m=+5.065662825 container attach 3c61ebb4c7488a0a9053eb12ed37319ecce6fc86c2c8e890ab56bdc96f789364 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_agnesi, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.33.12, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=)
Oct 5 03:44:13 localhost happy_agnesi[26826]: [
Oct 5 03:44:13 localhost happy_agnesi[26826]: {
Oct 5 03:44:13 localhost happy_agnesi[26826]: "available": false,
Oct 5 03:44:13 localhost happy_agnesi[26826]: "ceph_device": false,
Oct 5 03:44:13 localhost happy_agnesi[26826]: "device_id": "QEMU_DVD-ROM_QM00001",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "lsm_data": {},
Oct 5 03:44:13 localhost happy_agnesi[26826]: "lvs": [],
Oct 5 03:44:13 localhost happy_agnesi[26826]: "path": "/dev/sr0",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "rejected_reasons": [
Oct 5 03:44:13 localhost happy_agnesi[26826]: "Insufficient space (<5GB)",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "Has a FileSystem"
Oct 5 03:44:13 localhost happy_agnesi[26826]: ],
Oct 5 03:44:13 localhost happy_agnesi[26826]: "sys_api": {
Oct 5 03:44:13 localhost happy_agnesi[26826]: "actuators": null,
Oct 5 03:44:13 localhost happy_agnesi[26826]: "device_nodes": "sr0",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "human_readable_size": "482.00 KB",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "id_bus": "ata",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "model": "QEMU DVD-ROM",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "nr_requests": "2",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "partitions": {},
Oct 5 03:44:13 localhost happy_agnesi[26826]: "path": "/dev/sr0",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "removable": "1",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "rev": "2.5+",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "ro": "0",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "rotational": "1",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "sas_address": "",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "sas_device_handle": "",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "scheduler_mode": "mq-deadline",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "sectors": 0,
Oct 5 03:44:13 localhost happy_agnesi[26826]: "sectorsize": "2048",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "size": 493568.0,
Oct 5 03:44:13 localhost happy_agnesi[26826]: "support_discard": "0",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "type": "disk",
Oct 5 03:44:13 localhost happy_agnesi[26826]: "vendor": "QEMU"
Oct 5 03:44:13 localhost happy_agnesi[26826]: }
Oct 5 03:44:13 localhost happy_agnesi[26826]: }
Oct 5 03:44:13 localhost happy_agnesi[26826]: ]
Oct 5 03:44:13 localhost systemd[1]: libpod-3c61ebb4c7488a0a9053eb12ed37319ecce6fc86c2c8e890ab56bdc96f789364.scope: Deactivated successfully.
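The `happy_agnesi` container output above is a ceph-volume-style JSON device inventory: the only device, /dev/sr0, is reported unavailable with explicit rejection reasons. A minimal sketch of consuming such a report (field names taken from the log; the helper functions are illustrative, not part of Ceph's tooling):

```python
import json

# Device report as logged by the happy_agnesi container above (abridged).
report = json.loads("""
[
  {
    "available": false,
    "ceph_device": false,
    "device_id": "QEMU_DVD-ROM_QM00001",
    "path": "/dev/sr0",
    "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"],
    "sys_api": {"human_readable_size": "482.00 KB", "model": "QEMU DVD-ROM"}
  }
]
""")

def usable_devices(devices):
    """Return paths of devices the inventory marks as available for OSDs."""
    return [d["path"] for d in devices if d.get("available")]

def rejection_summary(devices):
    """Map each unavailable device path to its rejection reasons."""
    return {d["path"]: d["rejected_reasons"]
            for d in devices if not d.get("available")}

print(usable_devices(report))     # []
print(rejection_summary(report))  # {'/dev/sr0': ['Insufficient space (<5GB)', 'Has a FileSystem']}
```

On this host the QEMU DVD-ROM is the sole candidate, so the inventory correctly yields no usable OSD devices.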
Oct 5 03:44:13 localhost podman[26811]: 2025-10-05 07:44:13.214974579 +0000 UTC m=+5.959399743 container died 3c61ebb4c7488a0a9053eb12ed37319ecce6fc86c2c8e890ab56bdc96f789364 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_agnesi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_BRANCH=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True) Oct 5 03:44:13 localhost systemd[1]: var-lib-containers-storage-overlay-4de3669c0f7e4567159721944521fb567a955cc3a377d8cc49acd53a0ea6fa12-merged.mount: Deactivated successfully. 
Oct 5 03:44:13 localhost podman[28213]: 2025-10-05 07:44:13.309098287 +0000 UTC m=+0.084193234 container remove 3c61ebb4c7488a0a9053eb12ed37319ecce6fc86c2c8e890ab56bdc96f789364 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_agnesi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7) Oct 5 03:44:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 5 03:44:13 localhost systemd[1]: libpod-conmon-3c61ebb4c7488a0a9053eb12ed37319ecce6fc86c2c8e890ab56bdc96f789364.scope: Deactivated successfully. Oct 5 03:44:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 5 03:44:13 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully. Oct 5 03:44:13 localhost systemd[1]: Closed Process Core Dump Socket. Oct 5 03:44:13 localhost systemd[1]: Stopping Process Core Dump Socket... Oct 5 03:44:13 localhost systemd[1]: Listening on Process Core Dump Socket. Oct 5 03:44:13 localhost systemd[1]: Reloading. 
Oct 5 03:44:14 localhost systemd-rc-local-generator[28295]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:14 localhost systemd-sysv-generator[28298]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:14 localhost systemd[1]: Reloading.
Oct 5 03:44:14 localhost systemd-sysv-generator[28339]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:14 localhost systemd-rc-local-generator[28336]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:34 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:44:34 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
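Every `systemd[1]: Reloading.` in this log re-prints the warning that insights-client-boot.service line 24 uses the deprecated MemoryLimit= directive. Assuming the packaged unit is otherwise left alone, the usual remedy is a drop-in rather than editing the file under /usr/lib; this is a hypothetical sketch, and the 512M value is a placeholder, not the unit's actual setting:

```ini
# /etc/systemd/system/insights-client-boot.service.d/override.conf
# Hypothetical drop-in: supersede the deprecated MemoryLimit= from the
# packaged unit. The empty assignment resets the inherited value, then
# MemoryMax= sets the modern equivalent.
[Service]
MemoryLimit=
MemoryMax=512M
```

After writing the drop-in, `systemctl daemon-reload` picks it up and the deprecation warning stops repeating on each reload.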
Oct 5 03:44:34 localhost podman[28416]: Oct 5 03:44:34 localhost podman[28416]: 2025-10-05 07:44:34.610199669 +0000 UTC m=+0.073798963 container create c2125a1a58ffb503da96d4db7e19d6a4dad5727506fb19fb9125c2484903bd32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, name=rhceph, description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Oct 5 03:44:34 localhost systemd[1]: Started libpod-conmon-c2125a1a58ffb503da96d4db7e19d6a4dad5727506fb19fb9125c2484903bd32.scope. Oct 5 03:44:34 localhost systemd[1]: Started libcrun container. 
Oct 5 03:44:34 localhost podman[28416]: 2025-10-05 07:44:34.677047485 +0000 UTC m=+0.140646779 container init c2125a1a58ffb503da96d4db7e19d6a4dad5727506fb19fb9125c2484903bd32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, RELEASE=main) Oct 5 03:44:34 localhost podman[28416]: 2025-10-05 07:44:34.57987689 +0000 UTC m=+0.043476204 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:34 localhost podman[28416]: 2025-10-05 07:44:34.687529871 +0000 UTC m=+0.151129165 container start c2125a1a58ffb503da96d4db7e19d6a4dad5727506fb19fb9125c2484903bd32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=553, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55) Oct 5 03:44:34 localhost podman[28416]: 2025-10-05 07:44:34.687759737 +0000 UTC m=+0.151359041 container attach c2125a1a58ffb503da96d4db7e19d6a4dad5727506fb19fb9125c2484903bd32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_BRANCH=main, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main) Oct 5 03:44:34 localhost 
systemd[1]: libpod-c2125a1a58ffb503da96d4db7e19d6a4dad5727506fb19fb9125c2484903bd32.scope: Deactivated successfully. Oct 5 03:44:34 localhost pedantic_wu[28432]: 167 167 Oct 5 03:44:34 localhost podman[28416]: 2025-10-05 07:44:34.691727908 +0000 UTC m=+0.155327212 container died c2125a1a58ffb503da96d4db7e19d6a4dad5727506fb19fb9125c2484903bd32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=553, version=7, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main) Oct 5 03:44:34 localhost podman[28437]: 2025-10-05 07:44:34.780515691 +0000 UTC m=+0.075786334 container remove c2125a1a58ffb503da96d4db7e19d6a4dad5727506fb19fb9125c2484903bd32 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, architecture=x86_64)
Oct 5 03:44:34 localhost systemd[1]: libpod-conmon-c2125a1a58ffb503da96d4db7e19d6a4dad5727506fb19fb9125c2484903bd32.scope: Deactivated successfully.
Oct 5 03:44:34 localhost systemd[1]: Reloading.
Oct 5 03:44:34 localhost systemd-sysv-generator[28481]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:34 localhost systemd-rc-local-generator[28477]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:35 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:44:35 localhost systemd[1]: Reloading.
Oct 5 03:44:35 localhost systemd-rc-local-generator[28515]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:35 localhost systemd-sysv-generator[28520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:35 localhost systemd[1]: Reached target All Ceph clusters and services.
Oct 5 03:44:35 localhost systemd[1]: Reloading.
Oct 5 03:44:35 localhost systemd-sysv-generator[28560]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:35 localhost systemd-rc-local-generator[28556]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:35 localhost systemd[1]: Reached target Ceph cluster 659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 03:44:35 localhost systemd[1]: Reloading.
Oct 5 03:44:35 localhost systemd-sysv-generator[28598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:35 localhost systemd-rc-local-generator[28595]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:35 localhost systemd[1]: Reloading.
Oct 5 03:44:35 localhost systemd-rc-local-generator[28637]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:35 localhost systemd-sysv-generator[28640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:36 localhost systemd[1]: Created slice Slice /system/ceph-659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 03:44:36 localhost systemd[1]: Reached target System Time Set.
Oct 5 03:44:36 localhost systemd[1]: Reached target System Time Synchronized.
Oct 5 03:44:36 localhost systemd[1]: Starting Ceph crash.np0005471150 for 659062ac-50b4-5607-b699-3105da7f55ee...
Oct 5 03:44:36 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:44:36 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:44:36 localhost podman[28698]:
Oct 5 03:44:36 localhost podman[28698]: 2025-10-05 07:44:36.40811159 +0000 UTC m=+0.070758866 container create 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, name=rhceph, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, release=553, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=)
Oct 5 03:44:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f437ddfc2db67093e15fc4d801f0c84e21f1fd084410982607a4963e4647fc7/merged/etc/ceph/ceph.client.crash.np0005471150.keyring supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f437ddfc2db67093e15fc4d801f0c84e21f1fd084410982607a4963e4647fc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:36 localhost podman[28698]: 2025-10-05 07:44:36.379985596 +0000 UTC m=+0.042632872 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f437ddfc2db67093e15fc4d801f0c84e21f1fd084410982607a4963e4647fc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:36 localhost podman[28698]: 2025-10-05 07:44:36.503557872 +0000 UTC m=+0.166205158 container init 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, architecture=x86_64, distribution-scope=public, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 03:44:36 localhost podman[28698]: 2025-10-05 07:44:36.514557701 +0000 UTC m=+0.177204977 container start 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, release=553, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 03:44:36 localhost bash[28698]: 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4
Oct 5 03:44:36 localhost systemd[1]: Started Ceph crash.np0005471150 for 659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: 2025-10-05T07:44:36.681+0000 7f717416f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: 2025-10-05T07:44:36.681+0000 7f717416f640 -1 AuthRegistry(0x7f716c0680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: 2025-10-05T07:44:36.683+0000 7f717416f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: 2025-10-05T07:44:36.683+0000 7f717416f640 -1 AuthRegistry(0x7f717416e000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: 2025-10-05T07:44:36.690+0000 7f71726e5640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: 2025-10-05T07:44:36.691+0000 7f71716e3640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: 2025-10-05T07:44:36.691+0000 7f7171ee4640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: 2025-10-05T07:44:36.691+0000 7f717416f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 5 03:44:36 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150[28712]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 5 03:44:44 localhost podman[28797]:
Oct 5 03:44:44 localhost podman[28797]: 2025-10-05 07:44:44.683243886 +0000 UTC m=+0.068910929 container create 0079329c09d92451660d9f724b27c9749393bc439b4bd49a5a5adfe39343defc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_galois, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, release=553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 5 03:44:44 localhost systemd[1]: Started libpod-conmon-0079329c09d92451660d9f724b27c9749393bc439b4bd49a5a5adfe39343defc.scope.
Oct 5 03:44:44 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:44 localhost podman[28797]: 2025-10-05 07:44:44.657386521 +0000 UTC m=+0.043053574 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:44 localhost podman[28797]: 2025-10-05 07:44:44.75864311 +0000 UTC m=+0.144310143 container init 0079329c09d92451660d9f724b27c9749393bc439b4bd49a5a5adfe39343defc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_galois, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , distribution-scope=public, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-type=git)
Oct 5 03:44:44 localhost podman[28797]: 2025-10-05 07:44:44.768368637 +0000 UTC m=+0.154035670 container start 0079329c09d92451660d9f724b27c9749393bc439b4bd49a5a5adfe39343defc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_galois, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 03:44:44 localhost podman[28797]: 2025-10-05 07:44:44.768683535 +0000 UTC m=+0.154350618 container attach 0079329c09d92451660d9f724b27c9749393bc439b4bd49a5a5adfe39343defc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_galois, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 03:44:44 localhost eager_galois[28813]: 167 167
Oct 5 03:44:44 localhost systemd[1]: libpod-0079329c09d92451660d9f724b27c9749393bc439b4bd49a5a5adfe39343defc.scope: Deactivated successfully.
Oct 5 03:44:44 localhost podman[28797]: 2025-10-05 07:44:44.772611954 +0000 UTC m=+0.158278997 container died 0079329c09d92451660d9f724b27c9749393bc439b4bd49a5a5adfe39343defc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_galois, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git)
Oct 5 03:44:44 localhost systemd[1]: var-lib-containers-storage-overlay-d539d57e43704d84dccef9862b9673e3117ecae2c7dc2fd223b78ff171125589-merged.mount: Deactivated successfully.
Oct 5 03:44:44 localhost podman[28818]: 2025-10-05 07:44:44.857609431 +0000 UTC m=+0.075971679 container remove 0079329c09d92451660d9f724b27c9749393bc439b4bd49a5a5adfe39343defc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_galois, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph)
Oct 5 03:44:44 localhost systemd[1]: libpod-conmon-0079329c09d92451660d9f724b27c9749393bc439b4bd49a5a5adfe39343defc.scope: Deactivated successfully.
Oct 5 03:44:45 localhost podman[28838]:
Oct 5 03:44:45 localhost podman[28838]: 2025-10-05 07:44:45.066989074 +0000 UTC m=+0.072686606 container create 38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_shamir, release=553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux , name=rhceph, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 03:44:45 localhost systemd[1]: Started libpod-conmon-38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e.scope.
Oct 5 03:44:45 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f878eaaa2e1c8bab955225972ddda1fdbdc09c4e4c8f9b96de304a43a82cf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:45 localhost podman[28838]: 2025-10-05 07:44:45.037514156 +0000 UTC m=+0.043211688 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f878eaaa2e1c8bab955225972ddda1fdbdc09c4e4c8f9b96de304a43a82cf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f878eaaa2e1c8bab955225972ddda1fdbdc09c4e4c8f9b96de304a43a82cf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f878eaaa2e1c8bab955225972ddda1fdbdc09c4e4c8f9b96de304a43a82cf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f878eaaa2e1c8bab955225972ddda1fdbdc09c4e4c8f9b96de304a43a82cf8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:45 localhost podman[28838]: 2025-10-05 07:44:45.191869283 +0000 UTC m=+0.197566795 container init 38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_shamir, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, maintainer=Guillaume Abrioux , name=rhceph, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 03:44:45 localhost podman[28838]: 2025-10-05 07:44:45.201506627 +0000 UTC m=+0.207204149 container start 38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_shamir, vendor=Red Hat, Inc., version=7, vcs-type=git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=553, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 03:44:45 localhost podman[28838]: 2025-10-05 07:44:45.201783974 +0000 UTC m=+0.207481506 container attach 38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_shamir, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , RELEASE=main, version=7)
Oct 5 03:44:45 localhost gracious_shamir[28853]: --> passed data devices: 0 physical, 2 LVM
Oct 5 03:44:45 localhost gracious_shamir[28853]: --> relative data size: 1.0
Oct 5 03:44:45 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 5 03:44:45 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6e8ec8b7-b90f-4f54-843d-de869bc345c2
Oct 5 03:44:46 localhost lvm[28907]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 5 03:44:46 localhost lvm[28907]: VG ceph_vg0 finished
Oct 5 03:44:46 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 5 03:44:46 localhost gracious_shamir[28853]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 5 03:44:46 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 5 03:44:46 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 5 03:44:46 localhost gracious_shamir[28853]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 5 03:44:46 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 5 03:44:46 localhost gracious_shamir[28853]: stderr: got monmap epoch 3
Oct 5 03:44:46 localhost gracious_shamir[28853]: --> Creating keyring file for osd.1
Oct 5 03:44:46 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 5 03:44:46 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 5 03:44:46 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 6e8ec8b7-b90f-4f54-843d-de869bc345c2 --setuser ceph --setgroup ceph
Oct 5 03:44:49 localhost gracious_shamir[28853]: stderr: 2025-10-05T07:44:46.842+0000 7f8c4f668a80 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 5 03:44:49 localhost gracious_shamir[28853]: stderr: 2025-10-05T07:44:46.842+0000 7f8c4f668a80 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 5 03:44:49 localhost gracious_shamir[28853]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 5 03:44:49 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 5 03:44:49 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 5 03:44:49 localhost gracious_shamir[28853]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 5 03:44:49 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 5 03:44:49 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 5 03:44:49 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 5 03:44:49 localhost gracious_shamir[28853]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 5 03:44:49 localhost gracious_shamir[28853]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 5 03:44:49 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 5 03:44:49 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 73c62746-a7e0-43c7-afb1-d0d460437f43
Oct 5 03:44:50 localhost lvm[29837]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 5 03:44:50 localhost lvm[29837]: VG ceph_vg1 finished
Oct 5 03:44:50 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 5 03:44:50 localhost gracious_shamir[28853]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
Oct 5 03:44:50 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 5 03:44:50 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 5 03:44:50 localhost gracious_shamir[28853]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Oct 5 03:44:50 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
Oct 5 03:44:50 localhost gracious_shamir[28853]: stderr: got monmap epoch 3
Oct 5 03:44:50 localhost gracious_shamir[28853]: --> Creating keyring file for osd.4
Oct 5 03:44:50 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Oct 5 03:44:50 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Oct 5 03:44:50 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid 73c62746-a7e0-43c7-afb1-d0d460437f43 --setuser ceph --setgroup ceph
Oct 5 03:44:53 localhost gracious_shamir[28853]: stderr: 2025-10-05T07:44:50.675+0000 7f753e39aa80 -1 bluestore(/var/lib/ceph/osd/ceph-4//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 5 03:44:53 localhost gracious_shamir[28853]: stderr: 2025-10-05T07:44:50.675+0000 7f753e39aa80 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
Oct 5 03:44:53 localhost gracious_shamir[28853]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 5 03:44:53 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Oct 5 03:44:53 localhost gracious_shamir[28853]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
Oct 5 03:44:53 localhost gracious_shamir[28853]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Oct 5 03:44:53 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Oct 5 03:44:53 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 5 03:44:53 localhost gracious_shamir[28853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Oct 5 03:44:53 localhost gracious_shamir[28853]: --> ceph-volume lvm activate successful for osd ID: 4
Oct 5 03:44:53 localhost gracious_shamir[28853]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 5 03:44:53 localhost systemd[1]: libpod-38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e.scope: Deactivated successfully.
Oct 5 03:44:53 localhost systemd[1]: libpod-38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e.scope: Consumed 3.742s CPU time.
Oct 5 03:44:53 localhost podman[28838]: 2025-10-05 07:44:53.282529309 +0000 UTC m=+8.288226851 container died 38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_shamir, version=7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 03:44:53 localhost systemd[1]: var-lib-containers-storage-overlay-70f878eaaa2e1c8bab955225972ddda1fdbdc09c4e4c8f9b96de304a43a82cf8-merged.mount: Deactivated successfully.
Oct 5 03:44:53 localhost podman[30737]: 2025-10-05 07:44:53.37872595 +0000 UTC m=+0.079242142 container remove 38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_shamir, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, release=553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 03:44:53 localhost systemd[1]: libpod-conmon-38829c78d049307aa710a7fc620baca1bbdf371e7eb18ae221971f1fb1f20d6e.scope: Deactivated successfully.
Oct 5 03:44:54 localhost podman[30820]:
Oct 5 03:44:54 localhost podman[30820]: 2025-10-05 07:44:54.105508252 +0000 UTC m=+0.069272609 container create e2de6c1a5fc23782acec2f8469c66c80c440873bf7c9c7cb05e3dd3c55cbd9f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_yonath, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, name=rhceph)
Oct 5 03:44:54 localhost systemd[1]: Started libpod-conmon-e2de6c1a5fc23782acec2f8469c66c80c440873bf7c9c7cb05e3dd3c55cbd9f2.scope.
Oct 5 03:44:54 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:54 localhost podman[30820]: 2025-10-05 07:44:54.169195897 +0000 UTC m=+0.132960254 container init e2de6c1a5fc23782acec2f8469c66c80c440873bf7c9c7cb05e3dd3c55cbd9f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_yonath, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12)
Oct 5 03:44:54 localhost podman[30820]: 2025-10-05 07:44:54.075943821 +0000 UTC m=+0.039708198 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:54 localhost podman[30820]: 2025-10-05 07:44:54.181733036 +0000 UTC m=+0.145497393 container start e2de6c1a5fc23782acec2f8469c66c80c440873bf7c9c7cb05e3dd3c55cbd9f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_yonath, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.tags=rhceph ceph, ceph=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux )
Oct 5 03:44:54 localhost podman[30820]: 2025-10-05 07:44:54.182049754 +0000 UTC m=+0.145814141 container attach e2de6c1a5fc23782acec2f8469c66c80c440873bf7c9c7cb05e3dd3c55cbd9f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_yonath, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, release=553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 03:44:54 localhost wonderful_yonath[30835]: 167 167
Oct 5 03:44:54 localhost systemd[1]: libpod-e2de6c1a5fc23782acec2f8469c66c80c440873bf7c9c7cb05e3dd3c55cbd9f2.scope: Deactivated successfully.
Oct 5 03:44:54 localhost podman[30820]: 2025-10-05 07:44:54.185130482 +0000 UTC m=+0.148894879 container died e2de6c1a5fc23782acec2f8469c66c80c440873bf7c9c7cb05e3dd3c55cbd9f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_yonath, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, release=553, maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph)
Oct 5 03:44:54 localhost podman[30840]: 2025-10-05 07:44:54.266974418 +0000 UTC m=+0.073689340 container remove e2de6c1a5fc23782acec2f8469c66c80c440873bf7c9c7cb05e3dd3c55cbd9f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_yonath, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, version=7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-type=git, release=553, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 5 03:44:54 localhost systemd[1]: libpod-conmon-e2de6c1a5fc23782acec2f8469c66c80c440873bf7c9c7cb05e3dd3c55cbd9f2.scope: Deactivated successfully.
Oct 5 03:44:54 localhost systemd[1]: tmp-crun.0EG5AJ.mount: Deactivated successfully.
Oct 5 03:44:54 localhost systemd[1]: var-lib-containers-storage-overlay-085f2ced9c861ca955f83d5f2c304b817b8f785ab7bb4272dde25184b18fa091-merged.mount: Deactivated successfully.
Oct 5 03:44:54 localhost podman[30862]:
Oct 5 03:44:54 localhost podman[30862]: 2025-10-05 07:44:54.46760751 +0000 UTC m=+0.070567832 container create aebb69089566f47936511a96cf8a9dd9c530043dade1c43c0c7552f699f14c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bell, description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, name=rhceph, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=)
Oct 5 03:44:54 localhost systemd[1]: Started libpod-conmon-aebb69089566f47936511a96cf8a9dd9c530043dade1c43c0c7552f699f14c9f.scope.
Oct 5 03:44:54 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314bb569a3a149c7da5de2f19ca385a47f8ce617ac53b771b01e7fca637b24ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:54 localhost podman[30862]: 2025-10-05 07:44:54.439741153 +0000 UTC m=+0.042701475 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314bb569a3a149c7da5de2f19ca385a47f8ce617ac53b771b01e7fca637b24ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314bb569a3a149c7da5de2f19ca385a47f8ce617ac53b771b01e7fca637b24ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:54 localhost podman[30862]: 2025-10-05 07:44:54.565297499 +0000 UTC m=+0.168257831 container init aebb69089566f47936511a96cf8a9dd9c530043dade1c43c0c7552f699f14c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bell, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, ceph=True, version=7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12)
Oct 5 03:44:54 localhost systemd[1]: tmp-crun.XpjKmr.mount: Deactivated successfully.
Oct 5 03:44:54 localhost podman[30862]: 2025-10-05 07:44:54.578891084 +0000 UTC m=+0.181851386 container start aebb69089566f47936511a96cf8a9dd9c530043dade1c43c0c7552f699f14c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bell, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True)
Oct 5 03:44:54 localhost podman[30862]: 2025-10-05 07:44:54.579205302 +0000 UTC m=+0.182165634 container attach aebb69089566f47936511a96cf8a9dd9c530043dade1c43c0c7552f699f14c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bell, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, name=rhceph, version=7, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 03:44:54 localhost busy_bell[30878]: {
Oct 5 03:44:54 localhost busy_bell[30878]: "1": [
Oct 5 03:44:54 localhost busy_bell[30878]: {
Oct 5 03:44:54 localhost busy_bell[30878]: "devices": [
Oct 5 03:44:54 localhost busy_bell[30878]: "/dev/loop3"
Oct 5 03:44:54 localhost busy_bell[30878]: ],
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_name": "ceph_lv0",
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_size": "7511998464",
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cHuUfu-mDdg-WOEW-vM6Y-R0jh-a4Gc-auzrG4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=659062ac-50b4-5607-b699-3105da7f55ee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6e8ec8b7-b90f-4f54-843d-de869bc345c2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_uuid": "cHuUfu-mDdg-WOEW-vM6Y-R0jh-a4Gc-auzrG4",
Oct 5 03:44:54 localhost busy_bell[30878]: "name": "ceph_lv0",
Oct 5 03:44:54 localhost busy_bell[30878]: "path": "/dev/ceph_vg0/ceph_lv0",
Oct 5 03:44:54 localhost busy_bell[30878]: "tags": {
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.block_uuid": "cHuUfu-mDdg-WOEW-vM6Y-R0jh-a4Gc-auzrG4",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.cephx_lockbox_secret": "",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.cluster_fsid": "659062ac-50b4-5607-b699-3105da7f55ee",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.cluster_name": "ceph",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.crush_device_class": "",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.encrypted": "0",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.osd_fsid": "6e8ec8b7-b90f-4f54-843d-de869bc345c2",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.osd_id": "1",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.osdspec_affinity": "default_drive_group",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.type": "block",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.vdo": "0"
Oct 5 03:44:54 localhost busy_bell[30878]: },
Oct 5 03:44:54 localhost busy_bell[30878]: "type": "block",
Oct 5 03:44:54 localhost busy_bell[30878]: "vg_name": "ceph_vg0"
Oct 5 03:44:54 localhost busy_bell[30878]: }
Oct 5 03:44:54 localhost busy_bell[30878]: ],
Oct 5 03:44:54 localhost busy_bell[30878]: "4": [
Oct 5 03:44:54 localhost busy_bell[30878]: {
Oct 5 03:44:54 localhost busy_bell[30878]: "devices": [
Oct 5 03:44:54 localhost busy_bell[30878]: "/dev/loop4"
Oct 5 03:44:54 localhost busy_bell[30878]: ],
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_name": "ceph_lv1",
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_size": "7511998464",
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=DdRWN8-NV20-zbAf-XiJj-3Kn9-ez9b-dBE7mI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=659062ac-50b4-5607-b699-3105da7f55ee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=73c62746-a7e0-43c7-afb1-d0d460437f43,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 5 03:44:54 localhost busy_bell[30878]: "lv_uuid": "DdRWN8-NV20-zbAf-XiJj-3Kn9-ez9b-dBE7mI",
Oct 5 03:44:54 localhost busy_bell[30878]: "name": "ceph_lv1",
Oct 5 03:44:54 localhost busy_bell[30878]: "path": "/dev/ceph_vg1/ceph_lv1",
Oct 5 03:44:54 localhost busy_bell[30878]: "tags": {
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.block_uuid": "DdRWN8-NV20-zbAf-XiJj-3Kn9-ez9b-dBE7mI",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.cephx_lockbox_secret": "",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.cluster_fsid": "659062ac-50b4-5607-b699-3105da7f55ee",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.cluster_name": "ceph",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.crush_device_class": "",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.encrypted": "0",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.osd_fsid": "73c62746-a7e0-43c7-afb1-d0d460437f43",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.osd_id": "4",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.osdspec_affinity": "default_drive_group",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.type": "block",
Oct 5 03:44:54 localhost busy_bell[30878]: "ceph.vdo": "0"
Oct 5 03:44:54 localhost busy_bell[30878]: },
Oct 5 03:44:54 localhost busy_bell[30878]: "type": "block",
Oct 5 03:44:54 localhost busy_bell[30878]: "vg_name": "ceph_vg1"
Oct 5 03:44:54 localhost busy_bell[30878]: }
Oct 5 03:44:54 localhost busy_bell[30878]: ]
Oct 5 03:44:54 localhost busy_bell[30878]: }
Oct 5 03:44:54 localhost systemd[1]: libpod-aebb69089566f47936511a96cf8a9dd9c530043dade1c43c0c7552f699f14c9f.scope: Deactivated successfully.
Oct 5 03:44:54 localhost podman[30862]: 2025-10-05 07:44:54.914287214 +0000 UTC m=+0.517247566 container died aebb69089566f47936511a96cf8a9dd9c530043dade1c43c0c7552f699f14c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bell, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=553, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main)
Oct 5 03:44:55 localhost podman[30887]: 2025-10-05 07:44:55.006474493 +0000 UTC m=+0.081224532 container remove aebb69089566f47936511a96cf8a9dd9c530043dade1c43c0c7552f699f14c9f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_bell, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, distribution-scope=public, name=rhceph, release=553, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 03:44:55 localhost systemd[1]: libpod-conmon-aebb69089566f47936511a96cf8a9dd9c530043dade1c43c0c7552f699f14c9f.scope: Deactivated successfully.
Oct 5 03:44:55 localhost systemd[1]: var-lib-containers-storage-overlay-314bb569a3a149c7da5de2f19ca385a47f8ce617ac53b771b01e7fca637b24ec-merged.mount: Deactivated successfully.
Oct 5 03:44:55 localhost podman[30971]:
Oct 5 03:44:55 localhost podman[30971]: 2025-10-05 07:44:55.675452778 +0000 UTC m=+0.044854379 container create 64707aea2bb8670d4748217bea480cf4e187e9072de51a4a3546aed0cd6cbf43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_fermi, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, version=7, release=553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.tags=rhceph ceph)
Oct 5 03:44:55 localhost systemd[1]: Started libpod-conmon-64707aea2bb8670d4748217bea480cf4e187e9072de51a4a3546aed0cd6cbf43.scope.
Oct 5 03:44:55 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:55 localhost podman[30971]: 2025-10-05 07:44:55.723895947 +0000 UTC m=+0.093297558 container init 64707aea2bb8670d4748217bea480cf4e187e9072de51a4a3546aed0cd6cbf43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_fermi, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 03:44:55 localhost podman[30971]: 2025-10-05 07:44:55.731625303 +0000 UTC m=+0.101026914 container start 64707aea2bb8670d4748217bea480cf4e187e9072de51a4a3546aed0cd6cbf43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_fermi, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, name=rhceph, release=553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , architecture=x86_64, description=Red Hat Ceph Storage 7)
Oct 5 03:44:55 localhost podman[30971]: 2025-10-05 07:44:55.731821288 +0000 UTC m=+0.101222899 container attach 64707aea2bb8670d4748217bea480cf4e187e9072de51a4a3546aed0cd6cbf43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_fermi, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.expose-services=, release=553, vendor=Red Hat, Inc.)
Oct 5 03:44:55 localhost fervent_fermi[30986]: 167 167
Oct 5 03:44:55 localhost systemd[1]: libpod-64707aea2bb8670d4748217bea480cf4e187e9072de51a4a3546aed0cd6cbf43.scope: Deactivated successfully.
Oct 5 03:44:55 localhost podman[30971]: 2025-10-05 07:44:55.7362042 +0000 UTC m=+0.105605841 container died 64707aea2bb8670d4748217bea480cf4e187e9072de51a4a3546aed0cd6cbf43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_fermi, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, version=7, release=553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main)
Oct 5 03:44:55 localhost podman[30971]: 2025-10-05 07:44:55.660235802 +0000 UTC m=+0.029637413 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:55 localhost podman[30991]: 2025-10-05 07:44:55.81777209 +0000 UTC m=+0.070397348 container remove 64707aea2bb8670d4748217bea480cf4e187e9072de51a4a3546aed0cd6cbf43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_fermi, version=7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, release=553, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph)
Oct 5 03:44:55 localhost systemd[1]: libpod-conmon-64707aea2bb8670d4748217bea480cf4e187e9072de51a4a3546aed0cd6cbf43.scope: Deactivated successfully.
Oct 5 03:44:56 localhost podman[31020]: Oct 5 03:44:56 localhost podman[31020]: 2025-10-05 07:44:56.110371344 +0000 UTC m=+0.060949408 container create c6f1e56faccaeb0f90063eef9edde74642270fdb7e2ad2dcb8cc55fe29354798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate-test, architecture=x86_64, name=rhceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, build-date=2025-09-24T08:57:55) Oct 5 03:44:56 localhost systemd[1]: Started libpod-conmon-c6f1e56faccaeb0f90063eef9edde74642270fdb7e2ad2dcb8cc55fe29354798.scope. Oct 5 03:44:56 localhost systemd[1]: Started libcrun container. 
Oct 5 03:44:56 localhost podman[31020]: 2025-10-05 07:44:56.081071871 +0000 UTC m=+0.031649935 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417081fc775e16eb40dd35cc018d97e62e78772dd46dd851f135a3c4f7e71389/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417081fc775e16eb40dd35cc018d97e62e78772dd46dd851f135a3c4f7e71389/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417081fc775e16eb40dd35cc018d97e62e78772dd46dd851f135a3c4f7e71389/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417081fc775e16eb40dd35cc018d97e62e78772dd46dd851f135a3c4f7e71389/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417081fc775e16eb40dd35cc018d97e62e78772dd46dd851f135a3c4f7e71389/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:56 localhost podman[31020]: 2025-10-05 07:44:56.23553135 +0000 UTC m=+0.186109414 container init c6f1e56faccaeb0f90063eef9edde74642270fdb7e2ad2dcb8cc55fe29354798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate-test, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, 
com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, architecture=x86_64, CEPH_POINT_RELEASE=) Oct 5 03:44:56 localhost podman[31020]: 2025-10-05 07:44:56.246023026 +0000 UTC m=+0.196601090 container start c6f1e56faccaeb0f90063eef9edde74642270fdb7e2ad2dcb8cc55fe29354798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate-test, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 03:44:56 localhost 
podman[31020]: 2025-10-05 07:44:56.246369485 +0000 UTC m=+0.196947579 container attach c6f1e56faccaeb0f90063eef9edde74642270fdb7e2ad2dcb8cc55fe29354798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, name=rhceph, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 03:44:56 localhost systemd[1]: var-lib-containers-storage-overlay-1db25f3c46783b239d95d5c3c58d5f49eaa6ba5350175c25f36162bf9eda3206-merged.mount: Deactivated successfully. 
Oct 5 03:44:56 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate-test[31036]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Oct 5 03:44:56 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate-test[31036]: [--no-systemd] [--no-tmpfs] Oct 5 03:44:56 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate-test[31036]: ceph-volume activate: error: unrecognized arguments: --bad-option Oct 5 03:44:56 localhost systemd[1]: libpod-c6f1e56faccaeb0f90063eef9edde74642270fdb7e2ad2dcb8cc55fe29354798.scope: Deactivated successfully. Oct 5 03:44:56 localhost podman[31020]: 2025-10-05 07:44:56.465663029 +0000 UTC m=+0.416241133 container died c6f1e56faccaeb0f90063eef9edde74642270fdb7e2ad2dcb8cc55fe29354798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate-test, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_BRANCH=main, release=553, architecture=x86_64) Oct 5 03:44:56 localhost systemd[1]: tmp-crun.cO5nUb.mount: Deactivated successfully. 
Oct 5 03:44:56 localhost systemd-journald[619]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Oct 5 03:44:56 localhost systemd-journald[619]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 03:44:56 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 03:44:56 localhost systemd[1]: var-lib-containers-storage-overlay-417081fc775e16eb40dd35cc018d97e62e78772dd46dd851f135a3c4f7e71389-merged.mount: Deactivated successfully. Oct 5 03:44:56 localhost podman[31041]: 2025-10-05 07:44:56.541998096 +0000 UTC m=+0.069815263 container remove c6f1e56faccaeb0f90063eef9edde74642270fdb7e2ad2dcb8cc55fe29354798 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate-test, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_CLEAN=True, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, architecture=x86_64, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, 
version=7) Oct 5 03:44:56 localhost systemd[1]: libpod-conmon-c6f1e56faccaeb0f90063eef9edde74642270fdb7e2ad2dcb8cc55fe29354798.scope: Deactivated successfully. Oct 5 03:44:56 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 03:44:56 localhost systemd[1]: Reloading. Oct 5 03:44:56 localhost systemd-rc-local-generator[31097]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:44:56 localhost systemd-sysv-generator[31100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:44:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:44:57 localhost systemd[1]: Reloading. Oct 5 03:44:57 localhost systemd-rc-local-generator[31138]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:44:57 localhost systemd-sysv-generator[31143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:44:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:44:57 localhost systemd[1]: Starting Ceph osd.1 for 659062ac-50b4-5607-b699-3105da7f55ee... 
Oct 5 03:44:57 localhost podman[31201]: Oct 5 03:44:57 localhost podman[31201]: 2025-10-05 07:44:57.677328355 +0000 UTC m=+0.067237247 container create 7e14cb5a5cde09fedaef6115c34e5a537f40642682e325adce40ba53e295e1a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55) Oct 5 03:44:57 localhost systemd[1]: Started libcrun container. 
Oct 5 03:44:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe0bf8ef22a3118616d15781fdb86055e70fa38a5c719c4cf53576a24e53c70/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:57 localhost podman[31201]: 2025-10-05 07:44:57.649915569 +0000 UTC m=+0.039824451 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe0bf8ef22a3118616d15781fdb86055e70fa38a5c719c4cf53576a24e53c70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe0bf8ef22a3118616d15781fdb86055e70fa38a5c719c4cf53576a24e53c70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe0bf8ef22a3118616d15781fdb86055e70fa38a5c719c4cf53576a24e53c70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe0bf8ef22a3118616d15781fdb86055e70fa38a5c719c4cf53576a24e53c70/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:57 localhost podman[31201]: 2025-10-05 07:44:57.79459769 +0000 UTC m=+0.184506532 container init 7e14cb5a5cde09fedaef6115c34e5a537f40642682e325adce40ba53e295e1a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate, version=7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:44:57 localhost systemd[1]: tmp-crun.1oFJ0l.mount: Deactivated successfully. Oct 5 03:44:57 localhost podman[31201]: 2025-10-05 07:44:57.806707788 +0000 UTC m=+0.196616670 container start 7e14cb5a5cde09fedaef6115c34e5a537f40642682e325adce40ba53e295e1a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, RELEASE=main, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, 
io.openshift.expose-services=, io.openshift.tags=rhceph ceph) Oct 5 03:44:57 localhost podman[31201]: 2025-10-05 07:44:57.807026706 +0000 UTC m=+0.196935568 container attach 7e14cb5a5cde09fedaef6115c34e5a537f40642682e325adce40ba53e295e1a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_CLEAN=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=) Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate[31215]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Oct 5 03:44:58 localhost bash[31201]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate[31215]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0 Oct 5 03:44:58 localhost bash[31201]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path 
/var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0 Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate[31215]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0 Oct 5 03:44:58 localhost bash[31201]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0 Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate[31215]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Oct 5 03:44:58 localhost bash[31201]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate[31215]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block Oct 5 03:44:58 localhost bash[31201]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate[31215]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Oct 5 03:44:58 localhost bash[31201]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate[31215]: --> ceph-volume raw activate successful for osd ID: 1 Oct 5 03:44:58 localhost bash[31201]: --> ceph-volume raw activate successful for osd ID: 1 Oct 5 03:44:58 localhost systemd[1]: libpod-7e14cb5a5cde09fedaef6115c34e5a537f40642682e325adce40ba53e295e1a1.scope: Deactivated successfully. 
Oct 5 03:44:58 localhost podman[31201]: 2025-10-05 07:44:58.489617646 +0000 UTC m=+0.879526608 container died 7e14cb5a5cde09fedaef6115c34e5a537f40642682e325adce40ba53e295e1a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public) Oct 5 03:44:58 localhost podman[31330]: 2025-10-05 07:44:58.588486805 +0000 UTC m=+0.084290060 container remove 7e14cb5a5cde09fedaef6115c34e5a537f40642682e325adce40ba53e295e1a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1-activate, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:44:58 localhost systemd[1]: var-lib-containers-storage-overlay-dfe0bf8ef22a3118616d15781fdb86055e70fa38a5c719c4cf53576a24e53c70-merged.mount: Deactivated successfully. Oct 5 03:44:58 localhost podman[31390]: Oct 5 03:44:58 localhost podman[31390]: 2025-10-05 07:44:58.896716386 +0000 UTC m=+0.069664789 container create 94a4a4ebb3178a648320b29a893a5d9d2bf12784235ec0e772acdc4fe658039a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1, distribution-scope=public, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.33.12, RELEASE=main, release=553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git) Oct 5 03:44:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4176cb56b8fa36bd7ada175b71f0fc3ee801f2912b81136c524eedcee5cda3/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:58 localhost podman[31390]: 2025-10-05 07:44:58.867917296 +0000 UTC m=+0.040865699 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4176cb56b8fa36bd7ada175b71f0fc3ee801f2912b81136c524eedcee5cda3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4176cb56b8fa36bd7ada175b71f0fc3ee801f2912b81136c524eedcee5cda3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4176cb56b8fa36bd7ada175b71f0fc3ee801f2912b81136c524eedcee5cda3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4176cb56b8fa36bd7ada175b71f0fc3ee801f2912b81136c524eedcee5cda3/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:59 localhost podman[31390]: 2025-10-05 07:44:59.014182177 +0000 UTC m=+0.187130580 container init 94a4a4ebb3178a648320b29a893a5d9d2bf12784235ec0e772acdc4fe658039a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, 
Inc., RELEASE=main, release=553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, maintainer=Guillaume Abrioux ) Oct 5 03:44:59 localhost podman[31390]: 2025-10-05 07:44:59.024513568 +0000 UTC m=+0.197461981 container start 94a4a4ebb3178a648320b29a893a5d9d2bf12784235ec0e772acdc4fe658039a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1, distribution-scope=public, ceph=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, version=7, io.buildah.version=1.33.12, RELEASE=main, maintainer=Guillaume Abrioux , 
io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main)
Oct 5 03:44:59 localhost bash[31390]: 94a4a4ebb3178a648320b29a893a5d9d2bf12784235ec0e772acdc4fe658039a
Oct 5 03:44:59 localhost systemd[1]: Started Ceph osd.1 for 659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 03:44:59 localhost ceph-osd[31409]: set uid:gid to 167:167 (ceph:ceph)
Oct 5 03:44:59 localhost ceph-osd[31409]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Oct 5 03:44:59 localhost ceph-osd[31409]: pidfile_write: ignore empty --pid-file
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:44:59 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:44:59 localhost ceph-osd[31409]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) close
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) close
Oct 5 03:44:59 localhost ceph-osd[31409]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 5 03:44:59 localhost ceph-osd[31409]: load: jerasure load: lrc
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:44:59 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) close
Oct 5 03:44:59 localhost systemd[1]: tmp-crun.LzDqFx.mount: Deactivated successfully.
Oct 5 03:44:59 localhost podman[31500]: Oct 5 03:44:59 localhost podman[31500]: 2025-10-05 07:44:59.793255065 +0000 UTC m=+0.072784278 container create 5e9ea0dada2a1e9ee8120e23a4dcd0999dcfb63ae455f5e4c71218ccabe7021b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_stonebraker, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 03:44:59 localhost systemd[1]: Started libpod-conmon-5e9ea0dada2a1e9ee8120e23a4dcd0999dcfb63ae455f5e4c71218ccabe7021b.scope. Oct 5 03:44:59 localhost systemd[1]: Started libcrun container. 
Oct 5 03:44:59 localhost podman[31500]: 2025-10-05 07:44:59.762071924 +0000 UTC m=+0.041601147 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:59 localhost podman[31500]: 2025-10-05 07:44:59.87030797 +0000 UTC m=+0.149837193 container init 5e9ea0dada2a1e9ee8120e23a4dcd0999dcfb63ae455f5e4c71218ccabe7021b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_stonebraker, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public) Oct 5 03:44:59 localhost podman[31500]: 2025-10-05 07:44:59.881641687 +0000 UTC m=+0.161170910 container start 5e9ea0dada2a1e9ee8120e23a4dcd0999dcfb63ae455f5e4c71218ccabe7021b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_stonebraker, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., ceph=True, distribution-scope=public, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 03:44:59 localhost podman[31500]: 2025-10-05 07:44:59.881848383 +0000 UTC m=+0.161377657 container attach 5e9ea0dada2a1e9ee8120e23a4dcd0999dcfb63ae455f5e4c71218ccabe7021b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_stonebraker, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, version=7, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
CEPH_POINT_RELEASE=, GIT_CLEAN=True) Oct 5 03:44:59 localhost objective_stonebraker[31515]: 167 167 Oct 5 03:44:59 localhost systemd[1]: libpod-5e9ea0dada2a1e9ee8120e23a4dcd0999dcfb63ae455f5e4c71218ccabe7021b.scope: Deactivated successfully. Oct 5 03:44:59 localhost podman[31500]: 2025-10-05 07:44:59.886005718 +0000 UTC m=+0.165534961 container died 5e9ea0dada2a1e9ee8120e23a4dcd0999dcfb63ae455f5e4c71218ccabe7021b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_stonebraker, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.33.12, version=7, vendor=Red Hat, Inc., release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=) Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) 
rotational device, discard supported Oct 5 03:44:59 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 5 03:44:59 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) close Oct 5 03:44:59 localhost podman[31520]: 2025-10-05 07:44:59.97554221 +0000 UTC m=+0.079909509 container remove 5e9ea0dada2a1e9ee8120e23a4dcd0999dcfb63ae455f5e4c71218ccabe7021b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_stonebraker, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=553) Oct 5 03:44:59 localhost systemd[1]: libpod-conmon-5e9ea0dada2a1e9ee8120e23a4dcd0999dcfb63ae455f5e4c71218ccabe7021b.scope: Deactivated successfully. 
Oct 5 03:45:00 localhost ceph-osd[31409]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 5 03:45:00 localhost ceph-osd[31409]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621ae00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:00 localhost ceph-osd[31409]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Oct 5 03:45:00 localhost ceph-osd[31409]: bluefs mount
Oct 5 03:45:00 localhost ceph-osd[31409]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 5 03:45:00 localhost ceph-osd[31409]: bluefs mount shared_bdev_used = 0
Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: RocksDB version: 7.9.2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Git sha 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: DB SUMMARY
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: DB Session ID: X93KPK2KGIH58L2X2JYB
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: CURRENT file: CURRENT
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: IDENTITY file: IDENTITY
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.error_if_exists: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.create_if_missing: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.flush_verify_memtable_count: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.env: 0x564bb64aecb0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.fs: LegacyFileSystem
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.info_log: 0x564bb71aa340
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_file_opening_threads: 16
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.statistics: (nil)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.use_fsync: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb:
Options.max_log_file_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.log_file_time_to_roll: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.keep_log_file_num: 1000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.recycle_log_file_num: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_fallocate: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_mmap_reads: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_mmap_writes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.use_direct_reads: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.create_missing_column_families: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.db_log_dir: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.wal_dir: db.wal Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_cache_numshardbits: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.advise_random_on_open: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.db_write_buffer_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_manager: 0x564bb6204140 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.random_access_max_buffer_size: 
1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.use_adaptive_mutex: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.rate_limiter: (nil) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.wal_recovery_mode: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_thread_tracking: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_pipelined_write: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.unordered_write: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.row_cache: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.wal_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_ingest_behind: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.two_write_queues: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.manual_wal_flush: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.wal_compression: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.atomic_flush: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.persist_stats_to_disk: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.log_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.file_checksum_gen_factory: Unknown
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.best_efforts_recovery: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_data_in_errors: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.db_host_id: __hostname__
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enforce_single_del_contracts: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_background_jobs: 4
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_background_compactions: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_subcompactions: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.avoid_flush_during_shutdown: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.writable_file_max_buffer_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.delayed_write_rate : 16777216
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_total_wal_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.stats_dump_period_sec: 600
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.stats_persist_period_sec: 600
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.stats_history_buffer_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_open_files: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bytes_per_sync: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.wal_bytes_per_sync: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.strict_bytes_per_sync: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_readahead_size: 2097152
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_background_flushes: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Compression algorithms supported:
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kZSTD supported: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kXpressCompression supported: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kBZip2Compression supported: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kLZ4Compression supported: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kZlibCompression supported: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kSnappyCompression supported: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb:
Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb71aa500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: 
rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb71aa500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 
num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 
localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 
localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb71aa500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 
03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb71aa500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 
localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 
03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 
localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb71aa500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 
03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb71aa500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 
4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory 
(0x564bb71aa500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: 
rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: 
rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb71aa720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb71aa720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb71aa720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b423dd4-e3bc-4af2-8731-1ef7c15f126d
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300196818, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300197130, "job": 1, "event": "recovery_finished"}
Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 5 03:45:00 localhost ceph-osd[31409]: freelist init
Oct 5 03:45:00 localhost ceph-osd[31409]: freelist _read_cfg
Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 5 03:45:00 localhost ceph-osd[31409]: bluefs umount
Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) close
Oct 5 03:45:00 localhost podman[31746]:
Oct 5 03:45:00 localhost podman[31746]: 2025-10-05 07:45:00.327689886 +0000 UTC m=+0.076050870 container create 33d6bb34135f269745effc82446219d8dcaf8ad5a41a46240307d947faba1dd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate-test, release=553, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 03:45:00 localhost systemd[1]: Started libpod-conmon-33d6bb34135f269745effc82446219d8dcaf8ad5a41a46240307d947faba1dd6.scope.
Oct 5 03:45:00 localhost podman[31746]: 2025-10-05 07:45:00.295728825 +0000 UTC m=+0.044089809 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:45:00 localhost systemd[1]: Started libcrun container.
Oct 5 03:45:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3deeec6fc10e99e0afce64dcaebaba7602f6d8bc6ca3479c3f92d88abd499b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3deeec6fc10e99e0afce64dcaebaba7602f6d8bc6ca3479c3f92d88abd499b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3deeec6fc10e99e0afce64dcaebaba7602f6d8bc6ca3479c3f92d88abd499b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3deeec6fc10e99e0afce64dcaebaba7602f6d8bc6ca3479c3f92d88abd499b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3deeec6fc10e99e0afce64dcaebaba7602f6d8bc6ca3479c3f92d88abd499b/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:00 localhost podman[31746]: 2025-10-05 07:45:00.473775843 +0000 UTC m=+0.222136837 container init 33d6bb34135f269745effc82446219d8dcaf8ad5a41a46240307d947faba1dd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate-test, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git,
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, RELEASE=main, vcs-type=git, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Oct 5 03:45:00 localhost ceph-osd[31409]: bdev(0x564bb621b180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:45:00 localhost ceph-osd[31409]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB Oct 5 03:45:00 localhost ceph-osd[31409]: bluefs mount Oct 5 03:45:00 localhost ceph-osd[31409]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Oct 5 03:45:00 localhost ceph-osd[31409]: bluefs mount shared_bdev_used = 4718592 Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Oct 5 03:45:00 localhost podman[31746]: 2025-10-05 07:45:00.488194159 +0000 UTC m=+0.236555123 container start 33d6bb34135f269745effc82446219d8dcaf8ad5a41a46240307d947faba1dd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate-test, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, release=553, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2025-09-24T08:57:55, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: RocksDB version: 7.9.2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Git sha 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: DB SUMMARY Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: DB Session ID: X93KPK2KGIH58L2X2JYA Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: CURRENT file: CURRENT Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: IDENTITY file: IDENTITY Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.error_if_exists: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.create_if_missing: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.env: 0x564bb64afce0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.fs: LegacyFileSystem Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.info_log: 0x564bb71aae60 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_file_opening_threads: 16 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.statistics: (nil) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.use_fsync: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_log_file_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.log_file_time_to_roll: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.keep_log_file_num: 1000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.recycle_log_file_num: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_fallocate: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_mmap_reads: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_mmap_writes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.use_direct_reads: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.create_missing_column_families: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.db_log_dir: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.wal_dir: db.wal Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.table_cache_numshardbits: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.advise_random_on_open: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.db_write_buffer_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_manager: 0x564bb6205540 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.random_access_max_buffer_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.use_adaptive_mutex: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.rate_limiter: (nil) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.wal_recovery_mode: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_thread_tracking: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_pipelined_write: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.unordered_write: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.row_cache: None Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: Options.wal_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_ingest_behind: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.two_write_queues: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.manual_wal_flush: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.wal_compression: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.atomic_flush: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.persist_stats_to_disk: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.log_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.file_checksum_gen_factory: Unknown Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.best_efforts_recovery: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.allow_data_in_errors: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.db_host_id: __hostname__ Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enforce_single_del_contracts: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_background_jobs: 4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_background_compactions: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_subcompactions: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.writable_file_max_buffer_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.delayed_write_rate : 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_total_wal_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.stats_dump_period_sec: 600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.stats_persist_period_sec: 600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.stats_history_buffer_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_open_files: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bytes_per_sync: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.strict_bytes_per_sync: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_readahead_size: 2097152 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_background_flushes: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Compression algorithms supported: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kZSTD supported: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kXpressCompression supported: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kBZip2Compression supported: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kLZ4Compression supported: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kZlibCompression supported: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: #011kSnappyCompression supported: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: DMutex implementation: pthread_mutex_t Oct 5 03:45:00 localhost ceph-osd[31409]: 
rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb702ad60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 
whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 
03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost podman[31746]: 2025-10-05 07:45:00.491690987 +0000 UTC m=+0.240051991 container attach 33d6bb34135f269745effc82446219d8dcaf8ad5a41a46240307d947faba1dd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate-test, vcs-type=git, build-date=2025-09-24T08:57:55, version=7, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux ) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb702ad60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 
0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: 
true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: 
NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb702ad60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 
0x564bb61f3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 
localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: 
rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 
localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory 
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb702ad60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 
localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 
localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb702ad60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 
num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 
localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb702ad60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb702ad60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb702a880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564bb702a880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 
Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory 
(0x564bb702a880)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564bb61f22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: 
rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: 
Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: 
rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Oct 5 03:45:00 localhost 
ceph-osd[31409]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b423dd4-e3bc-4af2-8731-1ef7c15f126d Oct 5 03:45:00 localhost ceph-osd[31409]: 
rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300507981, "job": 1, "event": "recovery_started", "wal_files": [31]} Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300515373, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650300, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b423dd4-e3bc-4af2-8731-1ef7c15f126d", "db_session_id": "X93KPK2KGIH58L2X2JYA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300519919, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", 
"smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650300, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b423dd4-e3bc-4af2-8731-1ef7c15f126d", "db_session_id": "X93KPK2KGIH58L2X2JYA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300524774, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, 
"format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650300, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b423dd4-e3bc-4af2-8731-1ef7c15f126d", "db_session_id": "X93KPK2KGIH58L2X2JYA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300528919, "job": 1, "event": "recovery_finished"} Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564bb621b500 Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: DB pointer 0x564bb7103a00 Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Oct 5 03:45:00 localhost ceph-osd[31409]: 
bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4 Oct 5 03:45:00 localhost ceph-osd[31409]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 03:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bb61f3610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bb61f3610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) 
Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012 Oct 5 03:45:00 localhost ceph-osd[31409]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Oct 5 03:45:00 localhost ceph-osd[31409]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Oct 5 03:45:00 localhost ceph-osd[31409]: _get_class not 
permitted to load lua Oct 5 03:45:00 localhost ceph-osd[31409]: _get_class not permitted to load sdk Oct 5 03:45:00 localhost ceph-osd[31409]: _get_class not permitted to load test_remote_reads Oct 5 03:45:00 localhost ceph-osd[31409]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients Oct 5 03:45:00 localhost ceph-osd[31409]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Oct 5 03:45:00 localhost ceph-osd[31409]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds Oct 5 03:45:00 localhost ceph-osd[31409]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Oct 5 03:45:00 localhost ceph-osd[31409]: osd.1 0 load_pgs Oct 5 03:45:00 localhost ceph-osd[31409]: osd.1 0 load_pgs opened 0 pgs Oct 5 03:45:00 localhost ceph-osd[31409]: osd.1 0 log_to_monitors true Oct 5 03:45:00 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1[31405]: 2025-10-05T07:45:00.567+0000 7f6731dc9a80 -1 osd.1 0 log_to_monitors true Oct 5 03:45:00 localhost systemd[1]: var-lib-containers-storage-overlay-b3e7c18bf48897913e66e7c713afad94aba2fed392cc4f8867e76f7b9ff700d7-merged.mount: Deactivated successfully. Oct 5 03:45:00 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate-test[31761]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Oct 5 03:45:00 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate-test[31761]: [--no-systemd] [--no-tmpfs] Oct 5 03:45:00 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate-test[31761]: ceph-volume activate: error: unrecognized arguments: --bad-option Oct 5 03:45:00 localhost systemd[1]: libpod-33d6bb34135f269745effc82446219d8dcaf8ad5a41a46240307d947faba1dd6.scope: Deactivated successfully. 
Oct 5 03:45:00 localhost podman[31746]: 2025-10-05 07:45:00.721936399 +0000 UTC m=+0.470297343 container died 33d6bb34135f269745effc82446219d8dcaf8ad5a41a46240307d947faba1dd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.expose-services=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, name=rhceph, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 03:45:00 localhost systemd[1]: var-lib-containers-storage-overlay-7a3deeec6fc10e99e0afce64dcaebaba7602f6d8bc6ca3479c3f92d88abd499b-merged.mount: Deactivated successfully. 
Oct 5 03:45:00 localhost podman[31981]: 2025-10-05 07:45:00.809629975 +0000 UTC m=+0.078080372 container remove 33d6bb34135f269745effc82446219d8dcaf8ad5a41a46240307d947faba1dd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate-test, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, ceph=True, architecture=x86_64, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, RELEASE=main, distribution-scope=public, build-date=2025-09-24T08:57:55, release=553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 03:45:00 localhost systemd[1]: libpod-conmon-33d6bb34135f269745effc82446219d8dcaf8ad5a41a46240307d947faba1dd6.scope: Deactivated successfully. Oct 5 03:45:01 localhost systemd[1]: Reloading. Oct 5 03:45:01 localhost systemd-rc-local-generator[32036]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:45:01 localhost systemd-sysv-generator[32040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 03:45:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:45:01 localhost systemd[1]: Reloading. Oct 5 03:45:01 localhost systemd-rc-local-generator[32073]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:45:01 localhost systemd-sysv-generator[32076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:45:01 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Oct 5 03:45:01 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Oct 5 03:45:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:45:01 localhost systemd[1]: Starting Ceph osd.4 for 659062ac-50b4-5607-b699-3105da7f55ee... 
Oct 5 03:45:02 localhost podman[32139]: Oct 5 03:45:02 localhost podman[32139]: 2025-10-05 07:45:02.016013036 +0000 UTC m=+0.073987318 container create 8c2eea8e28511f691e4e1d7e085546564569e037864b523bb9279a5ad40794ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, ceph=True, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, release=553, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 03:45:02 localhost systemd[1]: Started libcrun container. 
Oct 5 03:45:02 localhost podman[32139]: 2025-10-05 07:45:01.985390249 +0000 UTC m=+0.043364531 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be3884f0768d5ee529412588a60a7966164d858872f266e47860845eb9fd378/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be3884f0768d5ee529412588a60a7966164d858872f266e47860845eb9fd378/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be3884f0768d5ee529412588a60a7966164d858872f266e47860845eb9fd378/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be3884f0768d5ee529412588a60a7966164d858872f266e47860845eb9fd378/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be3884f0768d5ee529412588a60a7966164d858872f266e47860845eb9fd378/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost ceph-osd[31409]: osd.1 0 done with init, starting boot process Oct 5 03:45:02 localhost ceph-osd[31409]: osd.1 0 start_boot Oct 5 03:45:02 localhost ceph-osd[31409]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1 Oct 5 03:45:02 localhost ceph-osd[31409]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Oct 5 03:45:02 localhost ceph-osd[31409]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Oct 5 03:45:02 localhost ceph-osd[31409]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Oct 5 03:45:02 localhost ceph-osd[31409]: osd.1 0 bench 
count 12288000 bsize 4 KiB Oct 5 03:45:02 localhost podman[32139]: 2025-10-05 07:45:02.146406255 +0000 UTC m=+0.204380497 container init 8c2eea8e28511f691e4e1d7e085546564569e037864b523bb9279a5ad40794ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, release=553, com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_BRANCH=main, maintainer=Guillaume Abrioux ) Oct 5 03:45:02 localhost podman[32139]: 2025-10-05 07:45:02.170318511 +0000 UTC m=+0.228292753 container start 8c2eea8e28511f691e4e1d7e085546564569e037864b523bb9279a5ad40794ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, 
CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 03:45:02 localhost podman[32139]: 2025-10-05 07:45:02.170940418 +0000 UTC m=+0.228914690 container attach 8c2eea8e28511f691e4e1d7e085546564569e037864b523bb9279a5ad40794ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-type=git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:45:02 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate[32153]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Oct 5 03:45:02 localhost bash[32139]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Oct 5 03:45:02 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate[32153]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Oct 5 03:45:02 localhost bash[32139]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Oct 5 03:45:02 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate[32153]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Oct 5 03:45:02 localhost bash[32139]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Oct 5 03:45:02 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate[32153]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Oct 5 03:45:02 localhost bash[32139]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Oct 5 03:45:02 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate[32153]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block Oct 5 03:45:02 localhost bash[32139]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block Oct 5 03:45:02 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate[32153]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Oct 5 03:45:02 localhost bash[32139]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Oct 5 03:45:02 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate[32153]: --> ceph-volume raw activate successful for osd ID: 4 Oct 5 03:45:02 localhost bash[32139]: --> ceph-volume raw activate successful for osd ID: 4 Oct 5 
03:45:02 localhost systemd[1]: libpod-8c2eea8e28511f691e4e1d7e085546564569e037864b523bb9279a5ad40794ca.scope: Deactivated successfully. Oct 5 03:45:02 localhost podman[32139]: 2025-10-05 07:45:02.926634632 +0000 UTC m=+0.984608954 container died 8c2eea8e28511f691e4e1d7e085546564569e037864b523bb9279a5ad40794ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_BRANCH=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Oct 5 03:45:03 localhost systemd[1]: var-lib-containers-storage-overlay-4be3884f0768d5ee529412588a60a7966164d858872f266e47860845eb9fd378-merged.mount: Deactivated successfully. 
Oct 5 03:45:03 localhost podman[32285]: 2025-10-05 07:45:03.039388004 +0000 UTC m=+0.099315671 container remove 8c2eea8e28511f691e4e1d7e085546564569e037864b523bb9279a5ad40794ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4-activate, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, RELEASE=main, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553) Oct 5 03:45:03 localhost podman[32345]: Oct 5 03:45:03 localhost podman[32345]: 2025-10-05 07:45:03.372518367 +0000 UTC m=+0.073553868 container create f2d57745da259923994760736848cc3ba267296c2a20a548fa3daf08e7fcea67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4, vcs-type=git, version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 03:45:03 localhost podman[32345]: 2025-10-05 07:45:03.342334261 +0000 UTC m=+0.043369782 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5299a93c5c9c4548995be2da5b1696ce42f8318000e2b1a16cd4b3661162b41/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5299a93c5c9c4548995be2da5b1696ce42f8318000e2b1a16cd4b3661162b41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5299a93c5c9c4548995be2da5b1696ce42f8318000e2b1a16cd4b3661162b41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5299a93c5c9c4548995be2da5b1696ce42f8318000e2b1a16cd4b3661162b41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5299a93c5c9c4548995be2da5b1696ce42f8318000e2b1a16cd4b3661162b41/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Oct 5 
03:45:03 localhost podman[32345]: 2025-10-05 07:45:03.528617008 +0000 UTC m=+0.229652519 container init f2d57745da259923994760736848cc3ba267296c2a20a548fa3daf08e7fcea67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, release=553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12) Oct 5 03:45:03 localhost podman[32345]: 2025-10-05 07:45:03.553043138 +0000 UTC m=+0.254078659 container start f2d57745da259923994760736848cc3ba267296c2a20a548fa3daf08e7fcea67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4, CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, 
io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:45:03 localhost bash[32345]: f2d57745da259923994760736848cc3ba267296c2a20a548fa3daf08e7fcea67 Oct 5 03:45:03 localhost systemd[1]: Started Ceph osd.4 for 659062ac-50b4-5607-b699-3105da7f55ee. Oct 5 03:45:03 localhost ceph-osd[32364]: set uid:gid to 167:167 (ceph:ceph) Oct 5 03:45:03 localhost ceph-osd[32364]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Oct 5 03:45:03 localhost ceph-osd[32364]: pidfile_write: ignore empty --pid-file Oct 5 03:45:03 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 5 03:45:03 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 5 03:45:03 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:45:03 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 5 03:45:03 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 5 03:45:03 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 
/var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 5 03:45:03 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:45:03 localhost ceph-osd[32364]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB Oct 5 03:45:03 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) close Oct 5 03:45:03 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) close Oct 5 03:45:04 localhost ceph-osd[32364]: starting osd.4 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal Oct 5 03:45:04 localhost ceph-osd[32364]: load: jerasure load: lrc Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) close Oct 5 03:45:04 localhost podman[32449]: Oct 5 03:45:04 localhost podman[32449]: 2025-10-05 07:45:04.353970781 +0000 UTC m=+0.085405379 container create 9cb89381ef3ed12d8089f3be9db0d9b3b8e63229c0d67bfadf2fe4a5e2fbbcab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_keller, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, 
io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, name=rhceph, build-date=2025-09-24T08:57:55, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=7) Oct 5 03:45:04 localhost systemd[1]: Started libpod-conmon-9cb89381ef3ed12d8089f3be9db0d9b3b8e63229c0d67bfadf2fe4a5e2fbbcab.scope. 
Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) close
Oct 5 03:45:04 localhost podman[32449]: 2025-10-05 07:45:04.317504775 +0000 UTC m=+0.048939434 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:45:04 localhost systemd[1]: Started libcrun container.
Oct 5 03:45:04 localhost podman[32449]: 2025-10-05 07:45:04.45287112 +0000 UTC m=+0.184305718 container init 9cb89381ef3ed12d8089f3be9db0d9b3b8e63229c0d67bfadf2fe4a5e2fbbcab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_keller, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public)
Oct 5 03:45:04 localhost wonderful_keller[32465]: 167 167
Oct 5 03:45:04 localhost systemd[1]: libpod-9cb89381ef3ed12d8089f3be9db0d9b3b8e63229c0d67bfadf2fe4a5e2fbbcab.scope: Deactivated successfully.
Oct 5 03:45:04 localhost podman[32449]: 2025-10-05 07:45:04.475891724 +0000 UTC m=+0.207326322 container start 9cb89381ef3ed12d8089f3be9db0d9b3b8e63229c0d67bfadf2fe4a5e2fbbcab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_keller, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.33.12, release=553, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc.)
Oct 5 03:45:04 localhost podman[32449]: 2025-10-05 07:45:04.476191532 +0000 UTC m=+0.207626130 container attach 9cb89381ef3ed12d8089f3be9db0d9b3b8e63229c0d67bfadf2fe4a5e2fbbcab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_keller, GIT_BRANCH=main, release=553, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True) Oct 5 03:45:04 localhost podman[32449]: 2025-10-05 07:45:04.479920017 +0000 UTC m=+0.211354675 container died 9cb89381ef3ed12d8089f3be9db0d9b3b8e63229c0d67bfadf2fe4a5e2fbbcab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_keller, ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 
on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., version=7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main) Oct 5 03:45:04 localhost systemd[1]: var-lib-containers-storage-overlay-b865336c1968ab3e9974d9f0904da035c5d8c516e694f1e7a0de669612f31111-merged.mount: Deactivated successfully. Oct 5 03:45:04 localhost podman[32474]: 2025-10-05 07:45:04.580102879 +0000 UTC m=+0.101508578 container remove 9cb89381ef3ed12d8089f3be9db0d9b3b8e63229c0d67bfadf2fe4a5e2fbbcab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_keller, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, release=553, architecture=x86_64, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, RELEASE=main, ceph=True) Oct 5 
03:45:04 localhost systemd[1]: libpod-conmon-9cb89381ef3ed12d8089f3be9db0d9b3b8e63229c0d67bfadf2fe4a5e2fbbcab.scope: Deactivated successfully. Oct 5 03:45:04 localhost ceph-osd[32364]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Oct 5 03:45:04 localhost ceph-osd[32364]: osd.4:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7ce00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:45:04 localhost ceph-osd[32364]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB Oct 5 03:45:04 localhost ceph-osd[32364]: bluefs mount Oct 5 03:45:04 localhost ceph-osd[32364]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Oct 5 03:45:04 localhost ceph-osd[32364]: bluefs mount shared_bdev_used = 
0 Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: RocksDB version: 7.9.2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Git sha 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: DB SUMMARY Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: DB Session ID: GVWS2EZ3PRSPXFG2Q4DR Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: CURRENT file: CURRENT Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: IDENTITY file: IDENTITY Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.error_if_exists: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.create_if_missing: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.env: 0x55eb8a010cb0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.fs: LegacyFileSystem Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.info_log: 0x55eb8ad08380 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_file_opening_threads: 16 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.statistics: 
(nil) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.use_fsync: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_log_file_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.log_file_time_to_roll: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.keep_log_file_num: 1000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.recycle_log_file_num: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_fallocate: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_mmap_reads: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_mmap_writes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.use_direct_reads: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.create_missing_column_families: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.db_log_dir: Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_dir: db.wal Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_cache_numshardbits: 6 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.advise_random_on_open: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.db_write_buffer_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_manager: 0x55eb89d66140 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.access_hint_on_compaction_start: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.random_access_max_buffer_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.use_adaptive_mutex: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.rate_limiter: (nil) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_recovery_mode: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_thread_tracking: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_pipelined_write: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.unordered_write: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.row_cache: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_filter: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_ingest_behind: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.two_write_queues: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.manual_wal_flush: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_compression: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.atomic_flush: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.persist_stats_to_disk: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 
5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.log_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.file_checksum_gen_factory: Unknown Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.best_efforts_recovery: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_data_in_errors: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.db_host_id: __hostname__ Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enforce_single_del_contracts: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_background_jobs: 4 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_background_compactions: -1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_subcompactions: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.writable_file_max_buffer_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.delayed_write_rate : 16777216 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_total_wal_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.stats_dump_period_sec: 600 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.stats_persist_period_sec: 600 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.stats_history_buffer_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_open_files: -1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bytes_per_sync: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.strict_bytes_per_sync: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_readahead_size: 2097152 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_background_flushes: -1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Compression algorithms supported: Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kZSTD supported: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kXpressCompression supported: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kBZip2Compression supported: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kLZ4Compression supported: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kZlibCompression supported: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kSnappyCompression supported: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: DMutex implementation: pthread_mutex_t Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost 
ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad08540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d54850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost 
ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: 
rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 
1677721600 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 
localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
[db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad08540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d54850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad08540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d54850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad08540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d54850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad08540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d54850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:04
localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad08540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d54850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost 
ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost 
ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 
03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad08540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d54850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 
localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 
03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 
localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Oct 5 03:45:04 localhost 
ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad08760)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost 
ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 
03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad08760)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 
4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory 
(0x55eb8ad08760)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: 
rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost 
ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: 
Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: 
rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Oct 5 03:45:04 localhost 
ceph-osd[32364]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8c441dfe-5e49-42b0-b980-3bf1a215279c Oct 5 03:45:04 localhost ceph-osd[32364]: 
rocksdb: EVENT_LOG_v1 {"time_micros": 1759650304732540, "job": 1, "event": "recovery_started", "wal_files": [31]} Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650304732758, "job": 1, "event": "recovery_finished"} Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old nid_max 1025 Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old blobid_max 10240 Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x1000 Oct 5 03:45:04 localhost ceph-osd[32364]: freelist init Oct 5 03:45:04 localhost ceph-osd[32364]: freelist _read_cfg Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Oct 5 03:45:04 localhost ceph-osd[32364]: bluefs umount Oct 5 03:45:04 localhost ceph-osd[32364]: 
bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) close Oct 5 03:45:04 localhost podman[32532]: Oct 5 03:45:04 localhost podman[32532]: 2025-10-05 07:45:04.805567979 +0000 UTC m=+0.074489590 container create 61e00b0c38aa137c51a787b5b18900e5106080aa5c3fefdf767164ac1bbf7590 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_chaplygin, maintainer=Guillaume Abrioux , io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, release=553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 03:45:04 localhost systemd[1]: Started libpod-conmon-61e00b0c38aa137c51a787b5b18900e5106080aa5c3fefdf767164ac1bbf7590.scope. Oct 5 03:45:04 localhost podman[32532]: 2025-10-05 07:45:04.770000156 +0000 UTC m=+0.038921737 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:45:04 localhost systemd[1]: Started libcrun container. 
Oct 5 03:45:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d352063137752d8e9ed3ea0be4b84d42b1e67e3cb20c903976dcc581d7ce1080/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d352063137752d8e9ed3ea0be4b84d42b1e67e3cb20c903976dcc581d7ce1080/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d352063137752d8e9ed3ea0be4b84d42b1e67e3cb20c903976dcc581d7ce1080/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:04 localhost podman[32532]: 2025-10-05 07:45:04.920732821 +0000 UTC m=+0.189654432 container init 61e00b0c38aa137c51a787b5b18900e5106080aa5c3fefdf767164ac1bbf7590 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_chaplygin, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , name=rhceph, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=)
Oct 5 03:45:04 localhost podman[32532]: 2025-10-05 07:45:04.930701885 +0000 UTC m=+0.199623466 container start 61e00b0c38aa137c51a787b5b18900e5106080aa5c3fefdf767164ac1bbf7590 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_chaplygin, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.33.12, RELEASE=main, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git)
Oct 5 03:45:04 localhost podman[32532]: 2025-10-05 07:45:04.931023153 +0000 UTC m=+0.199944754 container attach 61e00b0c38aa137c51a787b5b18900e5106080aa5c3fefdf767164ac1bbf7590 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_chaplygin, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64)
Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Oct 5 03:45:04 localhost ceph-osd[32364]: bdev(0x55eb89d7d180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:04 localhost ceph-osd[32364]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Oct 5 03:45:04 localhost ceph-osd[32364]: bluefs mount
Oct 5 03:45:04 localhost ceph-osd[32364]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 5 03:45:04 localhost ceph-osd[32364]: bluefs mount shared_bdev_used = 4718592
Oct 5 03:45:04 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: RocksDB version: 7.9.2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Git sha 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: DB SUMMARY
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: DB Session ID: GVWS2EZ3PRSPXFG2Q4DQ
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: CURRENT file: CURRENT
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: IDENTITY file: IDENTITY
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.error_if_exists: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.create_if_missing: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_checks: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.flush_verify_memtable_count: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.env: 0x55eb8a011dc0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.fs: LegacyFileSystem
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.info_log: 0x55eb8ad09ca0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_file_opening_threads: 16
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.statistics: (nil)
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.use_fsync: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_log_file_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_manifest_file_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.log_file_time_to_roll: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.keep_log_file_num: 1000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.recycle_log_file_num: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_fallocate: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_mmap_reads: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_mmap_writes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.use_direct_reads: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.create_missing_column_families: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.db_log_dir:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_dir: db.wal
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_cache_numshardbits: 6
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.WAL_ttl_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.WAL_size_limit_MB: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.manifest_preallocation_size: 4194304
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.is_fd_close_on_exec: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.advise_random_on_open: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.db_write_buffer_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_manager: 0x55eb89d66140
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.access_hint_on_compaction_start: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.random_access_max_buffer_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.use_adaptive_mutex: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.rate_limiter: (nil)
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_recovery_mode: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_thread_tracking: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_pipelined_write: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.unordered_write: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_concurrent_memtable_write: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_thread_max_yield_usec: 100
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_thread_slow_yield_usec: 3
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.row_cache: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_filter: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.avoid_flush_during_recovery: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_ingest_behind: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.two_write_queues: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.manual_wal_flush: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_compression: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.atomic_flush: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.persist_stats_to_disk: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_dbid_to_manifest: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.log_readahead_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.file_checksum_gen_factory: Unknown
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.best_efforts_recovery: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.allow_data_in_errors: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.db_host_id: __hostname__
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enforce_single_del_contracts: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_background_jobs: 4
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_background_compactions: -1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_subcompactions: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.avoid_flush_during_shutdown: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.writable_file_max_buffer_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.delayed_write_rate : 16777216
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_total_wal_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.stats_dump_period_sec: 600
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.stats_persist_period_sec: 600
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.stats_history_buffer_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_open_files: -1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bytes_per_sync: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.wal_bytes_per_sync: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.strict_bytes_per_sync: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_readahead_size: 2097152
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_background_flushes: -1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Compression algorithms supported:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kZSTD supported: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kXpressCompression supported: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kBZip2Compression supported: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kLZ4Compression supported: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kZlibCompression supported: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: #011kSnappyCompression supported: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad09e80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad09e80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad09e80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: 
NoCompression Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad09e80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 
0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 
localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: 
rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 
localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory 
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad09e80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 
localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 
localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Oct 5 03:45:05 localhost 
ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad09e80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 
num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 
localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 
localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8ad09e80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d542d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost 
ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost 
ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 
03:45:05 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8add80c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d55610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 
localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 
03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 
localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Oct 5 03:45:05 localhost 
ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8add80c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d55610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost 
ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 
03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eb8add80c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55eb89d55610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 
4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 
Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), 
log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8c441dfe-5e49-42b0-b980-3bf1a215279c Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305013271, "job": 1, "event": "recovery_started", "wal_files": [31]} Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305027737, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, 
"top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8c441dfe-5e49-42b0-b980-3bf1a215279c", "db_session_id": "GVWS2EZ3PRSPXFG2Q4DQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305032467, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8c441dfe-5e49-42b0-b980-3bf1a215279c", "db_session_id": "GVWS2EZ3PRSPXFG2Q4DQ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305042175, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, 
"fast_compression_estimated_data_size": 0, "db_id": "8c441dfe-5e49-42b0-b980-3bf1a215279c", "db_session_id": "GVWS2EZ3PRSPXFG2Q4DQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305056167, "job": 1, "event": "recovery_finished"} Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55eb89e16700 Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: DB pointer 0x55eb8ac55a00 Oct 5 03:45:05 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Oct 5 03:45:05 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super from 4, latest 4 Oct 5 03:45:05 localhost ceph-osd[32364]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super done Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 03:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 
0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval 
compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eb89d542d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 
0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eb89d542d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eb89d542d0#2 capacity: 460.80 MB usag Oct 5 03:45:05 localhost ceph-osd[32364]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Oct 5 03:45:05 localhost ceph-osd[32364]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Oct 5 03:45:05 localhost ceph-osd[32364]: _get_class not permitted to load lua Oct 5 03:45:05 localhost ceph-osd[32364]: _get_class not permitted to load sdk Oct 5 03:45:05 localhost ceph-osd[32364]: _get_class not permitted to load test_remote_reads Oct 5 03:45:05 localhost ceph-osd[32364]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for clients Oct 5 03:45:05 localhost ceph-osd[32364]: osd.4 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Oct 5 03:45:05 localhost ceph-osd[32364]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires 
for osds Oct 5 03:45:05 localhost ceph-osd[32364]: osd.4 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Oct 5 03:45:05 localhost ceph-osd[32364]: osd.4 0 load_pgs Oct 5 03:45:05 localhost ceph-osd[32364]: osd.4 0 load_pgs opened 0 pgs Oct 5 03:45:05 localhost ceph-osd[32364]: osd.4 0 log_to_monitors true Oct 5 03:45:05 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4[32360]: 2025-10-05T07:45:05.121+0000 7f4ea1507a80 -1 osd.4 0 log_to_monitors true Oct 5 03:45:05 localhost ceph-osd[31409]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.617 iops: 6301.878 elapsed_sec: 0.476 Oct 5 03:45:05 localhost ceph-osd[31409]: log_channel(cluster) log [WRN] : OSD bench result of 6301.878257 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
Oct 5 03:45:05 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1[31405]: 2025-10-05T07:45:05.263+0000 7f672e55d640 -1 osd.1 0 waiting for initial osdmap
Oct 5 03:45:05 localhost ceph-osd[31409]: osd.1 0 waiting for initial osdmap
Oct 5 03:45:05 localhost ceph-osd[31409]: osd.1 11 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 5 03:45:05 localhost ceph-osd[31409]: osd.1 11 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 5 03:45:05 localhost ceph-osd[31409]: osd.1 11 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 5 03:45:05 localhost ceph-osd[31409]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Oct 5 03:45:05 localhost ceph-osd[31409]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 5 03:45:05 localhost ceph-osd[31409]: osd.1 11 set_numa_affinity not setting numa affinity
Oct 5 03:45:05 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-1[31405]: 2025-10-05T07:45:05.280+0000 7f6729372640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 5 03:45:05 localhost ceph-osd[31409]: osd.1 11 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: {
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "6e8ec8b7-b90f-4f54-843d-de869bc345c2": {
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "ceph_fsid": "659062ac-50b4-5607-b699-3105da7f55ee",
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "osd_id": 1,
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "osd_uuid": "6e8ec8b7-b90f-4f54-843d-de869bc345c2",
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "type": "bluestore"
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: },
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "73c62746-a7e0-43c7-afb1-d0d460437f43": {
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "ceph_fsid": "659062ac-50b4-5607-b699-3105da7f55ee",
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "osd_id": 4,
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "osd_uuid": "73c62746-a7e0-43c7-afb1-d0d460437f43",
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: "type": "bluestore"
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: }
Oct 5 03:45:05 localhost flamboyant_chaplygin[32704]: }
Oct 5 03:45:05 localhost systemd[1]: libpod-61e00b0c38aa137c51a787b5b18900e5106080aa5c3fefdf767164ac1bbf7590.scope: Deactivated successfully.
Oct 5 03:45:05 localhost podman[32532]: 2025-10-05 07:45:05.596213511 +0000 UTC m=+0.865135102 container died 61e00b0c38aa137c51a787b5b18900e5106080aa5c3fefdf767164ac1bbf7590 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_chaplygin, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, distribution-scope=public, release=553, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhceph, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 03:45:05 localhost systemd[1]: tmp-crun.uyJWii.mount: Deactivated successfully.
Oct 5 03:45:05 localhost systemd[1]: var-lib-containers-storage-overlay-d352063137752d8e9ed3ea0be4b84d42b1e67e3cb20c903976dcc581d7ce1080-merged.mount: Deactivated successfully.
Oct 5 03:45:05 localhost podman[32956]: 2025-10-05 07:45:05.702516488 +0000 UTC m=+0.091684886 container remove 61e00b0c38aa137c51a787b5b18900e5106080aa5c3fefdf767164ac1bbf7590 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_chaplygin, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, RELEASE=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, version=7, release=553, io.openshift.expose-services=)
Oct 5 03:45:05 localhost systemd[1]: libpod-conmon-61e00b0c38aa137c51a787b5b18900e5106080aa5c3fefdf767164ac1bbf7590.scope: Deactivated successfully.
Oct 5 03:45:06 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 5 03:45:06 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 5 03:45:06 localhost ceph-osd[32364]: osd.4 0 done with init, starting boot process
Oct 5 03:45:06 localhost ceph-osd[32364]: osd.4 0 start_boot
Oct 5 03:45:06 localhost ceph-osd[32364]: osd.4 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 5 03:45:06 localhost ceph-osd[32364]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 5 03:45:06 localhost ceph-osd[32364]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 5 03:45:06 localhost ceph-osd[32364]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 5 03:45:06 localhost ceph-osd[32364]: osd.4 0 bench count 12288000 bsize 4 KiB
Oct 5 03:45:06 localhost ceph-osd[31409]: osd.1 12 state: booting -> active
Oct 5 03:45:07 localhost systemd[26085]: Starting Mark boot as successful...
Oct 5 03:45:07 localhost podman[33084]: 2025-10-05 07:45:07.506190465 +0000 UTC m=+0.082053723 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, name=rhceph, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 03:45:07 localhost systemd[26085]: Finished Mark boot as successful.
Oct 5 03:45:07 localhost podman[33084]: 2025-10-05 07:45:07.589213642 +0000 UTC m=+0.165076870 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, RELEASE=main, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7)
Oct 5 03:45:08 localhost ceph-osd[31409]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 5 03:45:08 localhost ceph-osd[31409]: osd.1 14 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 5 03:45:08 localhost ceph-osd[31409]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 20.171 iops: 5163.898 elapsed_sec: 0.581
Oct 5 03:45:09 localhost ceph-osd[32364]: log_channel(cluster) log [WRN] : OSD bench result of 5163.897813 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.4. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 0 waiting for initial osdmap
Oct 5 03:45:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4[32360]: 2025-10-05T07:45:09.104+0000 7f4e9dc9b640 -1 osd.4 0 waiting for initial osdmap
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 14 check_osdmap_features require_osd_release unknown -> reef
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 14 set_numa_affinity not setting numa affinity
Oct 5 03:45:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-4[32360]: 2025-10-05T07:45:09.130+0000 7f4e98ab0640 -1 osd.4 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 14 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial
Oct 5 03:45:09 localhost ceph-osd[32364]: osd.4 15 state: booting -> active
Oct 5 03:45:09 localhost podman[33281]:
Oct 5 03:45:09 localhost podman[33281]: 2025-10-05 07:45:09.703014318 +0000 UTC m=+0.076781739 container create 46baf95733bf5c05b50b91f0a50826580332e9fe3021a01f746a5ae172f3ccf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bohr, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container)
Oct 5 03:45:09 localhost systemd[1]: Started libpod-conmon-46baf95733bf5c05b50b91f0a50826580332e9fe3021a01f746a5ae172f3ccf5.scope.
Oct 5 03:45:09 localhost systemd[1]: Started libcrun container.
Oct 5 03:45:09 localhost podman[33281]: 2025-10-05 07:45:09.780142416 +0000 UTC m=+0.153909837 container init 46baf95733bf5c05b50b91f0a50826580332e9fe3021a01f746a5ae172f3ccf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bohr, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, release=553, name=rhceph, architecture=x86_64, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 03:45:09 localhost podman[33281]: 2025-10-05 07:45:09.68139418 +0000 UTC m=+0.055161611 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:45:09 localhost naughty_bohr[33296]: 167 167
Oct 5 03:45:09 localhost podman[33281]: 2025-10-05 07:45:09.791597746 +0000 UTC m=+0.165365187 container start 46baf95733bf5c05b50b91f0a50826580332e9fe3021a01f746a5ae172f3ccf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bohr, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., release=553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 03:45:09 localhost systemd[1]: libpod-46baf95733bf5c05b50b91f0a50826580332e9fe3021a01f746a5ae172f3ccf5.scope: Deactivated successfully.
Oct 5 03:45:09 localhost podman[33281]: 2025-10-05 07:45:09.792680673 +0000 UTC m=+0.166448164 container attach 46baf95733bf5c05b50b91f0a50826580332e9fe3021a01f746a5ae172f3ccf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bohr, version=7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 03:45:09 localhost podman[33281]: 2025-10-05 07:45:09.794961811 +0000 UTC m=+0.168729252 container died 46baf95733bf5c05b50b91f0a50826580332e9fe3021a01f746a5ae172f3ccf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bohr, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, maintainer=Guillaume Abrioux )
Oct 5 03:45:09 localhost systemd[1]: var-lib-containers-storage-overlay-0acf0394d18615d8a342bb2a79cd41b11aeb145415a6b517242f780269c75edd-merged.mount: Deactivated successfully.
Oct 5 03:45:09 localhost podman[33301]: 2025-10-05 07:45:09.876392618 +0000 UTC m=+0.076925643 container remove 46baf95733bf5c05b50b91f0a50826580332e9fe3021a01f746a5ae172f3ccf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bohr, name=rhceph, release=553, ceph=True, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main)
Oct 5 03:45:09 localhost systemd[1]: libpod-conmon-46baf95733bf5c05b50b91f0a50826580332e9fe3021a01f746a5ae172f3ccf5.scope: Deactivated successfully.
Oct 5 03:45:10 localhost podman[33321]:
Oct 5 03:45:10 localhost podman[33321]: 2025-10-05 07:45:10.051086361 +0000 UTC m=+0.079094479 container create 7ce03bdd16ba6c4c66a678f18b35561fe47e3878d2f993f3e3d47a121ad5ff22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wright, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, architecture=x86_64, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, release=553, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=)
Oct 5 03:45:10 localhost systemd[1]: Started libpod-conmon-7ce03bdd16ba6c4c66a678f18b35561fe47e3878d2f993f3e3d47a121ad5ff22.scope.
Oct 5 03:45:10 localhost systemd[1]: Started libcrun container.
Oct 5 03:45:10 localhost podman[33321]: 2025-10-05 07:45:10.020656979 +0000 UTC m=+0.048665137 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:45:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db00730a8eb27174ae3869b2834db77755f816a7632eee440dfbfc2274d4a1e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db00730a8eb27174ae3869b2834db77755f816a7632eee440dfbfc2274d4a1e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db00730a8eb27174ae3869b2834db77755f816a7632eee440dfbfc2274d4a1e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:10 localhost podman[33321]: 2025-10-05 07:45:10.164247132 +0000 UTC m=+0.192255260 container init 7ce03bdd16ba6c4c66a678f18b35561fe47e3878d2f993f3e3d47a121ad5ff22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wright, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.33.12, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, distribution-scope=public)
Oct 5 03:45:10 localhost podman[33321]: 2025-10-05 07:45:10.175473377 +0000 UTC m=+0.203481505 container start 7ce03bdd16ba6c4c66a678f18b35561fe47e3878d2f993f3e3d47a121ad5ff22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wright, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.buildah.version=1.33.12, GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55)
Oct 5 03:45:10 localhost podman[33321]: 2025-10-05 07:45:10.175715493 +0000 UTC m=+0.203723611 container attach 7ce03bdd16ba6c4c66a678f18b35561fe47e3878d2f993f3e3d47a121ad5ff22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wright, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, version=7, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, architecture=x86_64)
Oct 5 03:45:10 localhost ceph-osd[32364]: osd.4 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=15) [2,0,4] r=2 lpr=15 pi=[14,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Oct 5 03:45:11 localhost interesting_wright[33337]: [
Oct 5 03:45:11 localhost interesting_wright[33337]: {
Oct 5 03:45:11 localhost interesting_wright[33337]: "available": false,
Oct 5 03:45:11 localhost interesting_wright[33337]: "ceph_device": false,
Oct 5 03:45:11 localhost interesting_wright[33337]: "device_id": "QEMU_DVD-ROM_QM00001",
Oct 5 03:45:11 localhost interesting_wright[33337]: "lsm_data": {},
Oct 5 03:45:11 localhost interesting_wright[33337]: "lvs": [],
Oct 5 03:45:11 localhost interesting_wright[33337]: "path": "/dev/sr0",
Oct 5 03:45:11 localhost interesting_wright[33337]: "rejected_reasons": [
Oct 5 03:45:11 localhost interesting_wright[33337]: "Has a FileSystem",
Oct 5 03:45:11 localhost interesting_wright[33337]: "Insufficient space (<5GB)"
Oct 5 03:45:11 localhost interesting_wright[33337]: ],
Oct 5 03:45:11 localhost interesting_wright[33337]: "sys_api": {
Oct 5 03:45:11 localhost interesting_wright[33337]: "actuators": null,
Oct 5 03:45:11 localhost interesting_wright[33337]: "device_nodes": "sr0",
Oct 5 03:45:11 localhost interesting_wright[33337]: "human_readable_size": "482.00 KB",
Oct 5 03:45:11 localhost interesting_wright[33337]: "id_bus": "ata",
Oct 5 03:45:11 localhost interesting_wright[33337]: "model": "QEMU DVD-ROM",
Oct 5 03:45:11 localhost interesting_wright[33337]: "nr_requests": "2",
Oct 5 03:45:11 localhost interesting_wright[33337]: "partitions": {},
Oct 5 03:45:11 localhost interesting_wright[33337]: "path": "/dev/sr0",
Oct 5 03:45:11 localhost interesting_wright[33337]: "removable": "1",
Oct 5 03:45:11 localhost interesting_wright[33337]: "rev": "2.5+",
Oct 5 03:45:11 localhost interesting_wright[33337]: "ro": "0",
Oct 5 03:45:11 localhost interesting_wright[33337]: "rotational": "1",
Oct 5 03:45:11 localhost interesting_wright[33337]: "sas_address": "",
Oct 5 03:45:11 localhost interesting_wright[33337]: "sas_device_handle": "",
Oct 5 03:45:11 localhost interesting_wright[33337]: "scheduler_mode": "mq-deadline",
Oct 5 03:45:11 localhost interesting_wright[33337]: "sectors": 0,
Oct 5 03:45:11 localhost interesting_wright[33337]: "sectorsize": "2048",
Oct 5 03:45:11 localhost interesting_wright[33337]: "size": 493568.0,
Oct 5 03:45:11 localhost interesting_wright[33337]: "support_discard": "0",
Oct 5 03:45:11 localhost interesting_wright[33337]: "type": "disk",
Oct 5 03:45:11 localhost interesting_wright[33337]: "vendor": "QEMU"
Oct 5 03:45:11 localhost interesting_wright[33337]: }
Oct 5 03:45:11 localhost interesting_wright[33337]: }
Oct 5 03:45:11 localhost interesting_wright[33337]: ]
Oct 5 03:45:11 localhost systemd[1]: libpod-7ce03bdd16ba6c4c66a678f18b35561fe47e3878d2f993f3e3d47a121ad5ff22.scope: Deactivated successfully.
Oct 5 03:45:11 localhost podman[33321]: 2025-10-05 07:45:11.150938759 +0000 UTC m=+1.178946887 container died 7ce03bdd16ba6c4c66a678f18b35561fe47e3878d2f993f3e3d47a121ad5ff22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wright, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , ceph=True, GIT_CLEAN=True, name=rhceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, build-date=2025-09-24T08:57:55, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=)
Oct 5 03:45:11 localhost systemd[1]: var-lib-containers-storage-overlay-db00730a8eb27174ae3869b2834db77755f816a7632eee440dfbfc2274d4a1e9-merged.mount: Deactivated successfully.
Oct 5 03:45:11 localhost podman[34555]: 2025-10-05 07:45:11.248769581 +0000 UTC m=+0.087992433 container remove 7ce03bdd16ba6c4c66a678f18b35561fe47e3878d2f993f3e3d47a121ad5ff22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wright, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, release=553, vcs-type=git, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.33.12, name=rhceph)
Oct 5 03:45:11 localhost systemd[1]: libpod-conmon-7ce03bdd16ba6c4c66a678f18b35561fe47e3878d2f993f3e3d47a121ad5ff22.scope: Deactivated successfully.
Oct 5 03:45:20 localhost podman[34683]: 2025-10-05 07:45:20.763186323 +0000 UTC m=+0.087853510 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux , version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=553, io.buildah.version=1.33.12, RELEASE=main) Oct 5 03:45:20 localhost podman[34683]: 2025-10-05 07:45:20.868983998 +0000 UTC m=+0.193651225 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, 
io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 03:45:39 localhost sshd[34764]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:46:22 localhost podman[34868]: 2025-10-05 07:46:22.641048685 +0000 UTC m=+0.085756171 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-09-24T08:57:55, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, 
distribution-scope=public, io.openshift.expose-services=) Oct 5 03:46:22 localhost podman[34868]: 2025-10-05 07:46:22.770861471 +0000 UTC m=+0.215568967 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, vcs-type=git, version=7) Oct 5 03:46:27 localhost systemd-logind[760]: Session 14 logged out. Waiting for processes to exit. Oct 5 03:46:27 localhost systemd[1]: session-14.scope: Deactivated successfully. Oct 5 03:46:27 localhost systemd[1]: session-14.scope: Consumed 22.091s CPU time. Oct 5 03:46:27 localhost systemd-logind[760]: Removed session 14. Oct 5 03:48:10 localhost systemd[26085]: Created slice User Background Tasks Slice. Oct 5 03:48:10 localhost systemd[26085]: Starting Cleanup of User's Temporary Files and Directories... Oct 5 03:48:10 localhost systemd[26085]: Finished Cleanup of User's Temporary Files and Directories. 
Oct 5 03:49:46 localhost sshd[35243]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:49:46 localhost systemd-logind[760]: New session 28 of user zuul. Oct 5 03:49:46 localhost systemd[1]: Started Session 28 of User zuul. Oct 5 03:49:46 localhost python3[35291]: ansible-ansible.legacy.ping Invoked with data=pong Oct 5 03:49:47 localhost python3[35336]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 03:49:48 localhost python3[35356]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471150.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Oct 5 03:49:48 localhost python3[35412]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:49:49 localhost python3[35455]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759650588.5417051-66994-146200591620099/source _original_basename=tmpb5wb6uuv follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:49 localhost python3[35485]: ansible-file Invoked with 
path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:50 localhost python3[35501]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:50 localhost python3[35517]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:51 localhost python3[35533]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey#012 regexp=Generated by 
TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:51 localhost python3[35547]: ansible-ping Invoked with data=pong Oct 5 03:50:02 localhost sshd[35549]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:50:02 localhost systemd-logind[760]: New session 29 of user tripleo-admin. Oct 5 03:50:02 localhost systemd[1]: Created slice User Slice of UID 1003. Oct 5 03:50:02 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Oct 5 03:50:02 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Oct 5 03:50:02 localhost systemd[1]: Starting User Manager for UID 1003... Oct 5 03:50:02 localhost systemd[35553]: Queued start job for default target Main User Target. Oct 5 03:50:02 localhost systemd[35553]: Created slice User Application Slice. Oct 5 03:50:02 localhost systemd[35553]: Started Mark boot as successful after the user session has run 2 minutes. Oct 5 03:50:02 localhost systemd[35553]: Started Daily Cleanup of User's Temporary Directories. Oct 5 03:50:02 localhost systemd[35553]: Reached target Paths. Oct 5 03:50:02 localhost systemd[35553]: Reached target Timers. Oct 5 03:50:02 localhost systemd[35553]: Starting D-Bus User Message Bus Socket... Oct 5 03:50:02 localhost systemd[35553]: Starting Create User's Volatile Files and Directories... Oct 5 03:50:02 localhost systemd[35553]: Finished Create User's Volatile Files and Directories. Oct 5 03:50:02 localhost systemd[35553]: Listening on D-Bus User Message Bus Socket. Oct 5 03:50:02 localhost systemd[35553]: Reached target Sockets. Oct 5 03:50:02 localhost systemd[35553]: Reached target Basic System. Oct 5 03:50:02 localhost systemd[35553]: Reached target Main User Target. Oct 5 03:50:02 localhost systemd[35553]: Startup finished in 119ms. 
Oct 5 03:50:02 localhost systemd[1]: Started User Manager for UID 1003. Oct 5 03:50:02 localhost systemd[1]: Started Session 29 of User tripleo-admin. Oct 5 03:50:03 localhost python3[35614]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Oct 5 03:50:08 localhost python3[35634]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config Oct 5 03:50:08 localhost python3[35650]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Oct 5 03:50:09 localhost python3[35698]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.40m95znxtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:50:09 localhost python3[35728]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.40m95znxtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! 
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:50:10 localhost python3[35744]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.40m95znxtmphosts insertbefore=BOF block=172.17.0.106 np0005471150.localdomain np0005471150#012172.18.0.106 np0005471150.storage.localdomain np0005471150.storage#012172.20.0.106 np0005471150.storagemgmt.localdomain np0005471150.storagemgmt#012172.17.0.106 np0005471150.internalapi.localdomain np0005471150.internalapi#012172.19.0.106 np0005471150.tenant.localdomain np0005471150.tenant#012192.168.122.106 np0005471150.ctlplane.localdomain np0005471150.ctlplane#012172.17.0.107 np0005471151.localdomain np0005471151#012172.18.0.107 np0005471151.storage.localdomain np0005471151.storage#012172.20.0.107 np0005471151.storagemgmt.localdomain np0005471151.storagemgmt#012172.17.0.107 np0005471151.internalapi.localdomain np0005471151.internalapi#012172.19.0.107 np0005471151.tenant.localdomain np0005471151.tenant#012192.168.122.107 np0005471151.ctlplane.localdomain np0005471151.ctlplane#012172.17.0.108 np0005471152.localdomain np0005471152#012172.18.0.108 np0005471152.storage.localdomain np0005471152.storage#012172.20.0.108 np0005471152.storagemgmt.localdomain np0005471152.storagemgmt#012172.17.0.108 np0005471152.internalapi.localdomain np0005471152.internalapi#012172.19.0.108 np0005471152.tenant.localdomain np0005471152.tenant#012192.168.122.108 np0005471152.ctlplane.localdomain np0005471152.ctlplane#012172.17.0.103 np0005471146.localdomain np0005471146#012172.18.0.103 np0005471146.storage.localdomain np0005471146.storage#012172.20.0.103 np0005471146.storagemgmt.localdomain np0005471146.storagemgmt#012172.17.0.103 np0005471146.internalapi.localdomain np0005471146.internalapi#012172.19.0.103 np0005471146.tenant.localdomain np0005471146.tenant#012192.168.122.103 
np0005471146.ctlplane.localdomain np0005471146.ctlplane#012172.17.0.104 np0005471147.localdomain np0005471147#012172.18.0.104 np0005471147.storage.localdomain np0005471147.storage#012172.20.0.104 np0005471147.storagemgmt.localdomain np0005471147.storagemgmt#012172.17.0.104 np0005471147.internalapi.localdomain np0005471147.internalapi#012172.19.0.104 np0005471147.tenant.localdomain np0005471147.tenant#012192.168.122.104 np0005471147.ctlplane.localdomain np0005471147.ctlplane#012172.17.0.105 np0005471148.localdomain np0005471148#012172.18.0.105 np0005471148.storage.localdomain np0005471148.storage#012172.20.0.105 np0005471148.storagemgmt.localdomain np0005471148.storagemgmt#012172.17.0.105 np0005471148.internalapi.localdomain np0005471148.internalapi#012172.19.0.105 np0005471148.tenant.localdomain np0005471148.tenant#012192.168.122.105 np0005471148.ctlplane.localdomain np0005471148.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.178 overcloud.storage.localdomain#012172.20.0.167 overcloud.storagemgmt.localdomain#012172.17.0.227 overcloud.internalapi.localdomain#012172.21.0.204 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:50:11 localhost python3[35760]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.40m95znxtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:50:11 localhost python3[35777]: ansible-file Invoked with path=/tmp/ansible.40m95znxtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:50:12 localhost python3[35793]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:50:14 localhost python3[35810]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:50:18 localhost python3[35829]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:50:19 localhost python3[35846]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] 
download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:51:24 localhost sshd[36810]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:51:30 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=5 res=1 Oct 5 03:51:30 localhost kernel: SELinux: Converting 2700 SID table entries... Oct 5 03:51:30 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 03:51:30 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 03:51:30 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=6 res=1 Oct 5 03:51:31 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 03:51:31 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 03:51:31 localhost systemd[1]: Reloading. Oct 5 03:51:31 localhost systemd-rc-local-generator[37018]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:51:31 localhost systemd-sysv-generator[37023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 03:51:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:51:31 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 03:51:31 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 03:51:31 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 03:51:31 localhost systemd[1]: run-r2b1aaf64d744489a8a40b5855cf2013a.service: Deactivated successfully. Oct 5 03:51:31 localhost sshd[37446]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:51:32 localhost python3[37463]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:34 localhost python3[37602]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:51:34 localhost systemd[1]: Reloading. Oct 5 03:51:34 localhost systemd-rc-local-generator[37629]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:51:34 localhost systemd-sysv-generator[37635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:51:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 03:51:35 localhost python3[37656]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:35 localhost python3[37672]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:36 localhost python3[37689]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 03:51:36 localhost python3[37707]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:37 localhost python3[37725]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:37 localhost python3[37743]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 03:51:38 localhost systemd[1]: Reloading Network Manager... 
Oct 5 03:51:38 localhost NetworkManager[5981]: [1759650698.8618] audit: op="reload" arg="0" pid=37746 uid=0 result="success" Oct 5 03:51:38 localhost NetworkManager[5981]: [1759650698.8633] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf)) Oct 5 03:51:38 localhost NetworkManager[5981]: [1759650698.8634] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged Oct 5 03:51:38 localhost systemd[1]: Reloaded Network Manager. Oct 5 03:51:39 localhost python3[37762]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:39 localhost python3[37779]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:51:40 localhost python3[37797]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:51:40 localhost python3[37813]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:41 localhost python3[37829]: ansible-tempfile Invoked with state=file prefix=ansible. 
suffix= path=None Oct 5 03:51:41 localhost python3[37845]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:51:42 localhost python3[37861]: ansible-blockinfile Invoked with path=/tmp/ansible.vbh8enzv block=[192.168.122.106]*,[np0005471150.ctlplane.localdomain]*,[172.17.0.106]*,[np0005471150.internalapi.localdomain]*,[172.18.0.106]*,[np0005471150.storage.localdomain]*,[172.20.0.106]*,[np0005471150.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005471150.tenant.localdomain]*,[np0005471150.localdomain]*,[np0005471150]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCT5ftkzxR2Qyrkv4Bog+udHavLt9s9Di0AWsGW2RuyQQiM22RbERlEwcEpl46d2UZEA/h4vz9TbE4fxIRY43XsuoO7kScaRsaDEk80scoEanpXJXpL99y+HtDr7IiFnp920RFZWAvClhPuG5f4GTZcAH8JwlQdHLoU08owfBRpfZmDNZcoyX0tprcWQCD7KMlzpxwZFqhjkJVPrnq3lxWA9cG87b9CDA6sHuH8h4RYjBBtCOkxgTVQgBjGVWWjO64RQXgkKPObBX3sBjTYorcuu5af6cl8pwRuWCIDiskwHVqEvsdx7nXa+8le2b250IQoHti8LislYbkhX/LUO0TmKGbvUuzaK3gsuRGLxf+qG4UdCa7CYecLosB0sg0pv7c95e80sFtLwEFyKvUkMfEdbFIxMr03gd1i6lSeafCtY9Xk0sjkbJpMGaj2hsNlv1S6X8taFEHFuQyDEZ3ZkQXwxYkb0pqUef9Fn6d2VvlP4u7GHH+iQZtgv7NZrxvZOos=#012[192.168.122.107]*,[np0005471151.ctlplane.localdomain]*,[172.17.0.107]*,[np0005471151.internalapi.localdomain]*,[172.18.0.107]*,[np0005471151.storage.localdomain]*,[172.20.0.107]*,[np0005471151.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005471151.tenant.localdomain]*,[np0005471151.localdomain]*,[np0005471151]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDeDNxXs+ZUIP9/a2zVFllXGXsP2/RtUXLMLDP4YL71gvVrRf+MpnYrvCNPSMtaio8hFnrpiDFXxbT/vT8cGaq0VtYxjMm6ggMMEpJTsx2xG5zkDW3nbKnfBWdlrf2h3+WUBHOB9mofrB5CT0cuNDshy8Zq3cPyqMZVPdJXPIH+fsWD+b65aHwAk93ThJehxt/nPEDADcRKHLYFTlAyvnZ5aEvqj714SQIjwLcSkgaTfu3JmjF9FllzZz3DKBld7fRbggrz2rkww5yxrvj9W/KsoSugYq1N+fEEWdUonP/PYnRfJ9Qe+OMV5TmEEYuUOqPqaVs8vMZI4zYb3l5asdknHsN0N3URQbZANs9Fettfh3uoOPlyegvPjIMukQ8KZAy+KQWSAzho7RnR5ULuWVNi7Rj9mFC01wy0778Zqb7BlWc+Yn3kNXEkR9u1vQjBq7B+Ie922b6pYARzXmaE2yjzI7QdYo1IB/o9UIP/zEfugki+28qB0215MGXrk3EqTk8=#012[192.168.122.108]*,[np0005471152.ctlplane.localdomain]*,[172.17.0.108]*,[np0005471152.internalapi.localdomain]*,[172.18.0.108]*,[np0005471152.storage.localdomain]*,[172.20.0.108]*,[np0005471152.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005471152.tenant.localdomain]*,[np0005471152.localdomain]*,[np0005471152]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQL9bjzo5YAISp2Bxwtb4g1hALXPqelm3WBGwGfh3/tRyDvnxqpgAH4BkgnyM92vRVDUZgylBjfJ54aevQzR0sxDWI5un2tTEepezxrrMvJNDvOss/fCLi88oah/o3qw++j3XWh7zZNBR2ZlXoM/pIxbee1SynEGOX2B0csXrd1qrshg6L4eHx3xP0RwAulzm5seEcMLqx8KH2dq77wY0VqQkpaFyFb7FqX77rxq/UKPpgE0srhO8SRvE9De5pNe/qOciIyF6dgzu5EyyHu7KYjTILbMKxDa32WE/P2Rf7vIscc9uCS7JGMjSz6NeeFnpRpsv8N/pMUGyuUGsD1ZchAk2FVF+E5cZtF04URyBXHR3aMjxItV46eMTahkYu0ieB5XIe1ht+1mpTNW5HuK+c5IGVa1+5Y3udf7NKVNLxbJKJpiyb1+mVhhrwPzJFaIuMT3y2IHiF3xGDIof8BMBzvhUW/T0WYISPRdb3hpP5yODYfEz7Mmnpe6mZj+mFVVc=#012[192.168.122.103]*,[np0005471146.ctlplane.localdomain]*,[172.17.0.103]*,[np0005471146.internalapi.localdomain]*,[172.18.0.103]*,[np0005471146.storage.localdomain]*,[172.20.0.103]*,[np0005471146.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005471146.tenant.localdomain]*,[np0005471146.localdomain]*,[np0005471146]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDB7OvQtGFS2ddbuT67PLzOZMMKExXKgLGlJbGmtwnZie42R//csfGTuDcY5sTL5gAKr5LgWtvuSJPxC5H8l1UXw+Jr1ot425wmg47AIcheuJNQqzQ7tPAGH3PICnVC6aPHAOVRVF+gH7UOtvdgmSE7iMATMRPcUy2tqR8NCuKKvzDeS/2RQXJpgWok3C9RwXiVS5oUv9jUyevFtgntUOYojmdQgQKC7AwBkYfT7TF3CJZYryU/VVFtwd7a/UiSCw5QLoTN8NxCyROZfFtmylvUybp8RdUroQiriJw1zcQyVLsXbwq0clpb5hc+/3tQLZv3a6JrVpp5DZq+MW98UkErXy11sX4Mk9e2seewM0xMkdGzMReNlZqtUWLIISbhxkBby9gn3WRKG32HdCCSD66ZhNAfOCfpaO3dNiCRUyzYoh4WRF7pu7nwBQ/eTQp8SGptdGGHUf0XF9tqRWjj2nrVrHHOnbj/9clk9VdTU6dbcxFoz3X5SWbovR40rDPz6e0=#012[192.168.122.104]*,[np0005471147.ctlplane.localdomain]*,[172.17.0.104]*,[np0005471147.internalapi.localdomain]*,[172.18.0.104]*,[np0005471147.storage.localdomain]*,[172.20.0.104]*,[np0005471147.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005471147.tenant.localdomain]*,[np0005471147.localdomain]*,[np0005471147]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz7dSoZhAVsu7Q6pQ5T0a3vdxjM8VsWq083YCwmW5ZBuWxtpO+ywiBUZXF2GXQh83uhFPjTL6AVFeIX5lNLPi70M1qL6Twe/O2mk2gSzlx225JQnN98IGNIaiWFoDWJeh+QC5ahKjsZLMqt7JQaJMEu8Y+pNNhDzn+mrA5SQL/4KeoVuUMVnHW606U26xi/2P8WkxBdjPuLtDQdFdmprrS1/lNbxCAMj0MhrqsxbpX9uLe04KqrNXmsaTlvu+XKlf2y7mxaihY81Qbyf86Guw2DS8EIhDZjC2olPxoqJJn5ZAGtvtc/FzkH/pbbMy1CbD6OnTFGsUHbZKS9eBF7PtpLp3YiUp/FyRfiyxmtelUycYx7bqdixnmEGj4O2Ju2ehdpxO1RyBRyrfUelVA8bfBft6yd41RwKwujj5OtnOXzqb7I8O83ZgbDm6oUjTG+59hElsoR3PI5ow3C3NTrDQxwesLfuTjCrjHCWnvKIQb51xqtNRDT8PTStx27/FxOJ0=#012[192.168.122.105]*,[np0005471148.ctlplane.localdomain]*,[172.17.0.105]*,[np0005471148.internalapi.localdomain]*,[172.18.0.105]*,[np0005471148.storage.localdomain]*,[172.20.0.105]*,[np0005471148.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005471148.tenant.localdomain]*,[np0005471148.localdomain]*,[np0005471148]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCav0eZ81SP1lgxNKp8kzS2MGddVZXD3CnfZarlQErB75DRL4T/NvcVXnfxKn4UPX+h1zwIlKhrD0kHzKTVqifYPUqAmLb8rYREMTmXhQxto2b7VGPMQJtDAprHqyUEFlSdV8NbN3SVctntX/mSKO9bD06JFfa3F62ItPVHy6SnAKMzgNdSszOdKFvbEzC2oxcehr1uB2BAOIiTb1KxyTjXhvXZSYUsBxiGWPOP83oZQxCJlh/VjIUu6P2F6+mv1415n4ujbEujO8/iVbBF1uy28bTobQfABbfPNDNUCd9Gr+xDlT4JuuYTcjqG+gr3yvctzwj/+lxYcJbC0ZYtRhJ0pu8gjm44UFVFCpPxwPpvkKV5n+jU3uaSX98EZpaTlK51qqfwX29LxmMKs3pezfixQ67KCoq1jcDNXUiZpX9svKFD2Drlx+6s9pBkQGZcsmVNiCKQBJmrpFCgYhAPOEIjAGPkic0qp+pAaJtQpB/gYfF/cNCJmCm80s5s/jRuSOs=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:42 localhost python3[37877]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vbh8enzv' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:43 localhost python3[37895]: ansible-file Invoked with path=/tmp/ansible.vbh8enzv state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:44 localhost python3[37911]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 03:51:44 localhost 
python3[37927]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:44 localhost python3[37945]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:45 localhost python3[37964]: ansible-community.general.cloud_init_data_facts Invoked with filter=status Oct 5 03:51:47 localhost python3[38101]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:48 localhost python3[38118]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:51:51 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 5 03:51:51 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 5 03:51:51 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 03:51:51 localhost systemd[1]: Starting man-db-cache-update.service... 
Oct 5 03:51:51 localhost systemd[1]: Reloading. Oct 5 03:51:51 localhost systemd-rc-local-generator[38185]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:51:51 localhost systemd-sysv-generator[38192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:51:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:51:51 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 03:51:51 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Oct 5 03:51:51 localhost systemd[1]: tuned.service: Deactivated successfully. Oct 5 03:51:51 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Oct 5 03:51:51 localhost systemd[1]: tuned.service: Consumed 1.764s CPU time. Oct 5 03:51:51 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Oct 5 03:51:51 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 03:51:51 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 03:51:51 localhost systemd[1]: run-r98284175eba544deb95cf694c21f32bb.service: Deactivated successfully. Oct 5 03:51:53 localhost systemd[1]: Started Dynamic System Tuning Daemon. Oct 5 03:51:53 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 03:51:53 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 03:51:53 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 03:51:53 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 03:51:53 localhost systemd[1]: run-rda9462139761483897804bce3ebbacae.service: Deactivated successfully. 
Oct 5 03:51:54 localhost python3[38555]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:51:54 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Oct 5 03:51:54 localhost systemd[1]: tuned.service: Deactivated successfully. Oct 5 03:51:54 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Oct 5 03:51:54 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Oct 5 03:51:55 localhost systemd[1]: Started Dynamic System Tuning Daemon. Oct 5 03:51:56 localhost python3[38750]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:57 localhost python3[38767]: ansible-slurp Invoked with src=/etc/tuned/active_profile Oct 5 03:51:57 localhost python3[38783]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:51:58 localhost python3[38799]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:52:00 localhost python3[38819]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:52:00 localhost python3[38836]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:52:03 localhost python3[38852]: ansible-replace Invoked with 
regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:04 localhost sshd[38853]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:52:08 localhost python3[38870]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:08 localhost python3[38918]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:09 localhost python3[38963]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650728.612813-71405-279671886913401/source _original_basename=tmpk02tor3d follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:09 localhost python3[38993]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 
03:52:10 localhost python3[39041]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:10 localhost systemd[35553]: Starting Mark boot as successful... Oct 5 03:52:10 localhost systemd[35553]: Finished Mark boot as successful. Oct 5 03:52:10 localhost python3[39085]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650730.2081554-71629-29861954731054/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=43b29c8557766d8327a1fa06529a284fbedbdaa9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:11 localhost python3[39147]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:11 localhost python3[39190]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650731.1642234-71691-21529234883887/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=48c763e87e973e17d11bd4dcd68a412176c73bf2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:12 localhost python3[39252]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:12 localhost python3[39295]: 
ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650732.0272436-71691-186739275465164/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=97e470c59032f2514ad5196642ab40dc0e60ec7a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:13 localhost python3[39357]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:13 localhost python3[39400]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650732.981271-71691-168161196742238/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=68b5a56a66cb10764ef3288009ad5e9b7e8faf12 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:14 localhost python3[39462]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:14 localhost python3[39505]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650733.891213-71691-61070724771187/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:15 localhost python3[39567]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:15 localhost python3[39610]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650734.756863-71691-89339320876808/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=18fc34bcdf4bf8a8e8842f88300f55b23554684d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:15 localhost python3[39672]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:16 localhost python3[39715]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650735.6188107-71691-14810177276120/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:16 localhost python3[39777]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:17 localhost python3[39820]: ansible-ansible.legacy.copy Invoked with 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650736.4088016-71691-143124821134965/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=eec99266e2b532da3b9cbf709d99ea3775a9e36f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:17 localhost python3[39882]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:18 localhost python3[39925]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650737.324812-71691-7439369819303/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:18 localhost python3[39987]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:18 localhost python3[40030]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650738.1945338-71691-215883631499122/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:19 localhost python3[40092]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:19 localhost python3[40135]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650739.055925-71691-61275269875119/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=0fdebf5b956974395ba2d837bd36b7fd21e5a68e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:20 localhost python3[40165]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:52:21 localhost python3[40213]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:52:21 localhost python3[40256]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650740.995116-72524-134837742174301/source _original_basename=tmphy6tf737 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:52:26 localhost python3[40286]: ansible-setup Invoked with gather_subset=['!all', 
'!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 03:52:26 localhost python3[40347]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:52:31 localhost python3[40364]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:52:36 localhost python3[40457]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:52:37 localhost python3[40480]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:52:37 localhost python3[40503]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:52:38 localhost python3[40526]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) 
.*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:52:38 localhost python3[40549]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:53:21 localhost python3[40572]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:21 localhost python3[40620]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:21 localhost python3[40638]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmpuufnibbd recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:22 localhost python3[40668]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:22 localhost python3[40716]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:23 localhost python3[40734]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:23 localhost python3[40796]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:24 localhost python3[40814]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:24 localhost python3[40876]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:24 localhost python3[40894]: ansible-ansible.legacy.file Invoked with mode=None 
dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:25 localhost python3[40956]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:25 localhost python3[40974]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:26 localhost python3[41036]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:26 localhost python3[41054]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:26 localhost python3[41116]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json 
follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:27 localhost python3[41134]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:27 localhost python3[41196]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:28 localhost python3[41214]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:28 localhost python3[41276]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:28 localhost python3[41294]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:29 localhost python3[41356]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:29 localhost python3[41374]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:30 localhost python3[41436]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:30 localhost python3[41454]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:30 localhost python3[41516]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:31 localhost python3[41534]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json 
_original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:31 localhost python3[41564]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:53:32 localhost python3[41612]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:32 localhost python3[41630]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmp56j8i4v3 recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:35 localhost python3[41722]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:53:39 localhost python3[41754]: 
ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:53:41 localhost python3[41772]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:53:41 localhost python3[41790]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:53:41 localhost systemd[1]: Reloading. Oct 5 03:53:41 localhost systemd-sysv-generator[41818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:53:41 localhost systemd-rc-local-generator[41814]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:53:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:53:42 localhost systemd[1]: Starting Netfilter Tables... Oct 5 03:53:42 localhost systemd[1]: Finished Netfilter Tables. 
Oct 5 03:53:42 localhost python3[41880]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:43 localhost python3[41923]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650822.400398-75141-142343948767113/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:43 localhost python3[41953]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:44 localhost python3[41971]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:44 localhost python3[42020]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:44 localhost python3[42063]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650824.1866713-75387-113244022484146/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:45 localhost python3[42125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:45 localhost python3[42168]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650825.132299-75445-248407227350936/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:46 localhost python3[42230]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:46 localhost python3[42273]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650826.1695783-75603-196338002681418/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:47 localhost python3[42335]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:47 localhost python3[42378]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650827.079141-75656-124977095346189/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:48 localhost python3[42440]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:49 localhost python3[42483]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650827.9637861-75734-279181277449829/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:49 localhost python3[42513]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:50 localhost python3[42578]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:50 localhost python3[42595]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:51 localhost python3[42612]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:51 localhost python3[42631]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:53:51 localhost python3[42647]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:53:52 localhost python3[42663]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:53:52 localhost python3[42679]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 5 03:53:53 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=7 res=1
Oct 5 03:53:53 localhost python3[42699]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 5 03:53:54 localhost kernel: SELinux: Converting 2704 SID table entries...
Oct 5 03:53:54 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 03:53:54 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 03:53:54 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 03:53:54 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 03:53:54 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 03:53:54 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 03:53:54 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 03:53:54 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=8 res=1
Oct 5 03:53:55 localhost python3[42720]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 5 03:53:55 localhost kernel: SELinux: Converting 2704 SID table entries...
Oct 5 03:53:55 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 03:53:55 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 03:53:55 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 03:53:55 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 03:53:55 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 03:53:55 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 03:53:55 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 03:53:56 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=9 res=1
Oct 5 03:53:56 localhost python3[42741]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 5 03:53:57 localhost kernel: SELinux: Converting 2704 SID table entries...
Oct 5 03:53:57 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 03:53:57 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 03:53:57 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 03:53:57 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 03:53:57 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 03:53:57 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 03:53:57 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 03:53:57 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=10 res=1
Oct 5 03:53:57 localhost python3[42763]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:53:57 localhost python3[42779]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:53:58 localhost python3[42795]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:53:58 localhost python3[42811]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:53:58 localhost python3[42827]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:59 localhost python3[42844]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:54:03 localhost python3[42861]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:04 localhost python3[42909]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:04 localhost python3[42952]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650843.7080762-76563-155670744217850/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:04 localhost python3[42982]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 03:54:04 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 5 03:54:04 localhost systemd[1]: Stopped Load Kernel Modules.
Oct 5 03:54:04 localhost systemd[1]: Stopping Load Kernel Modules...
Oct 5 03:54:04 localhost systemd[1]: Starting Load Kernel Modules...
Oct 5 03:54:05 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 5 03:54:05 localhost kernel: Bridge firewalling registered
Oct 5 03:54:05 localhost systemd-modules-load[42985]: Inserted module 'br_netfilter'
Oct 5 03:54:05 localhost systemd-modules-load[42985]: Module 'msr' is built in
Oct 5 03:54:05 localhost systemd[1]: Finished Load Kernel Modules.
Oct 5 03:54:05 localhost python3[43036]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:05 localhost python3[43079]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650845.205363-76606-213780886408375/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:06 localhost python3[43109]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:06 localhost python3[43126]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:06 localhost python3[43144]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:07 localhost python3[43162]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:07 localhost python3[43179]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:07 localhost python3[43196]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:08 localhost python3[43213]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:08 localhost python3[43231]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:08 localhost python3[43249]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:09 localhost python3[43267]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:09 localhost python3[43285]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:09 localhost python3[43303]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:10 localhost python3[43321]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:10 localhost python3[43339]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:10 localhost python3[43356]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:11 localhost python3[43373]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:11 localhost python3[43390]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:11 localhost python3[43407]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 5 03:54:12 localhost python3[43425]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 03:54:12 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 5 03:54:12 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 5 03:54:12 localhost systemd[1]: Stopping Apply Kernel Variables...
Oct 5 03:54:12 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 5 03:54:12 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 5 03:54:12 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 5 03:54:12 localhost python3[43445]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:12 localhost python3[43461]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:13 localhost python3[43477]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:13 localhost python3[43493]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:54:14 localhost python3[43509]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:14 localhost python3[43525]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:14 localhost python3[43541]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:14 localhost python3[43557]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:15 localhost python3[43573]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:15 localhost python3[43621]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:16 localhost python3[43664]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650855.3506866-77000-25708130963096/source _original_basename=tmp1a3nhb0z follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:16 localhost python3[43694]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:17 localhost python3[43711]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:18 localhost python3[43759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:18 localhost python3[43802]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650858.2327003-77274-16157809045288/source _original_basename=tmp1acu6m9i follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:19 localhost python3[43832]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:19 localhost python3[43848]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:20 localhost python3[43864]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:20 localhost python3[43880]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:20 localhost python3[43896]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:21 localhost python3[43912]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:21 localhost python3[43928]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:21 localhost python3[43944]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:22 localhost python3[43960]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:22 localhost python3[43976]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Oct 5 03:54:22 localhost python3[43998]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471150.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 5 03:54:23 localhost python3[44022]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Oct 5 03:54:23 localhost python3[44038]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:24 localhost python3[44087]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:24 localhost python3[44130]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650863.915685-77600-161386191725932/source _original_basename=tmpviocj8ic follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:25 localhost python3[44160]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 5 03:54:25 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=11 res=1
Oct 5 03:54:26 localhost python3[44181]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:26 localhost python3[44197]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Oct 5 03:54:27 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=12 res=1
Oct 5 03:54:27 localhost python3[44217]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:54:30 localhost python3[44234]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 03:54:31 localhost python3[44295]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:31 localhost python3[44311]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:32 localhost python3[44371]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:33 localhost python3[44414]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650872.2518487-77960-24294780565161/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=af4f75909b35b88c1eca45aa1227bc59228374c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:33 localhost python3[44476]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:34 localhost python3[44521]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650873.3298292-78015-169973280319327/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2
checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:34 localhost python3[44551]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:34 localhost python3[44567]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:35 localhost python3[44583]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:35 localhost python3[44599]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:36 localhost python3[44647]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True 
get_attributes=True Oct 5 03:54:36 localhost python3[44720]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650876.0075736-78134-64890413446517/source _original_basename=tmp3kx3efs6 follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:37 localhost python3[44771]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:37 localhost python3[44817]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:38 localhost python3[44865]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None 
disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:54:41 localhost python3[44929]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:42 localhost python3[44974]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650881.3062322-78334-213199587360450/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:42 localhost python3[45005]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:54:42 localhost systemd[1]: Stopping OpenSSH server daemon... Oct 5 03:54:42 localhost systemd[1]: sshd.service: Deactivated successfully. Oct 5 03:54:42 localhost systemd[1]: Stopped OpenSSH server daemon. Oct 5 03:54:42 localhost systemd[1]: sshd.service: Consumed 2.581s CPU time, read 1.9M from disk, written 8.0K to disk. Oct 5 03:54:42 localhost systemd[1]: Stopped target sshd-keygen.target. Oct 5 03:54:42 localhost systemd[1]: Stopping sshd-keygen.target... Oct 5 03:54:42 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 5 03:54:42 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). 
Oct 5 03:54:42 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 03:54:42 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 5 03:54:42 localhost systemd[1]: Starting OpenSSH server daemon...
Oct 5 03:54:42 localhost sshd[45009]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:54:42 localhost systemd[1]: Started OpenSSH server daemon.
Oct 5 03:54:43 localhost python3[45025]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:44 localhost python3[45043]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:44 localhost python3[45061]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:54:48 localhost python3[45110]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:48 localhost python3[45128]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:49 localhost python3[45158]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 03:54:50 localhost python3[45208]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:50 localhost python3[45226]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:50 localhost python3[45256]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 03:54:50 localhost systemd[1]: Reloading.
Oct 5 03:54:50 localhost systemd-rc-local-generator[45283]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:54:50 localhost systemd-sysv-generator[45286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:54:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:54:51 localhost systemd[1]: Starting chronyd online sources service...
Oct 5 03:54:51 localhost chronyc[45295]: 200 OK
Oct 5 03:54:51 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Oct 5 03:54:51 localhost systemd[1]: Finished chronyd online sources service.
Oct 5 03:54:51 localhost python3[45311]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:51 localhost chronyd[25884]: System clock was stepped by -0.000044 seconds
Oct 5 03:54:51 localhost python3[45328]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:52 localhost python3[45345]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:52 localhost chronyd[25884]: System clock was stepped by 0.000000 seconds
Oct 5 03:54:52 localhost python3[45362]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:53 localhost python3[45379]: ansible-timezone Invoked with name=UTC hwclock=None
Oct 5 03:54:53 localhost systemd[1]: Starting Time & Date Service...
Oct 5 03:54:53 localhost systemd[1]: Started Time & Date Service.
Oct 5 03:54:54 localhost python3[45399]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:54 localhost python3[45416]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:54:55 localhost python3[45433]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Oct 5 03:54:55 localhost python3[45449]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:54:56 localhost python3[45465]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:56 localhost python3[45481]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:54:57 localhost python3[45529]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:57 localhost python3[45572]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650896.957739-79537-134906160342472/source _original_basename=tmpmirl70n1 follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:58 localhost python3[45634]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:54:58 localhost python3[45677]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650897.862489-79591-229749962963352/source _original_basename=tmpftmir01n follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:54:59 localhost python3[45707]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 5 03:54:59 localhost systemd[1]: Reloading.
Oct 5 03:54:59 localhost systemd-rc-local-generator[45733]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:54:59 localhost systemd-sysv-generator[45739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:54:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:54:59 localhost python3[45761]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:55:00 localhost python3[45777]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:55:00 localhost python3[45794]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:55:00 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully.
Oct 5 03:55:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 03:55:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3255 writes, 16K keys, 3255 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3255 writes, 143 syncs, 22.76 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3255 writes, 16K keys, 3255 commit groups, 1.0 writes per commit group, ingest: 14.65 MB, 0.02 MB/s#012Interval WAL: 3255 writes, 143 syncs, 22.76 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 
0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bb61f3610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) 
Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bb61f3610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Oct 5 03:55:00 localhost python3[45811]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:55:01 localhost python3[45827]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:55:01 localhost python3[45875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:55:02 localhost python3[45918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650901.3809462-79785-201455556500577/source _original_basename=tmp9fhbmcnl follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:55:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 5 03:55:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3384 writes, 16K keys, 3384 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3384 writes, 195 syncs, 17.35 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3384 writes, 16K keys, 3384 commit groups, 1.0 writes per commit group, ingest: 15.25 MB, 0.03 MB/s#012Interval WAL: 3384 writes, 195 syncs, 17.35 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt)
Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eb89d542d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 
level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eb89d542d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative 
compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Oct 5 03:55:10 localhost systemd[35553]: Created slice User Background Tasks Slice. Oct 5 03:55:10 localhost systemd[35553]: Starting Cleanup of User's Temporary Files and Directories... Oct 5 03:55:10 localhost systemd[35553]: Finished Cleanup of User's Temporary Files and Directories. Oct 5 03:55:23 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Oct 5 03:55:23 localhost python3[45951]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:55:23 localhost python3[45967]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Oct 5 03:55:24 localhost python3[45983]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:55:24 localhost python3[45999]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:55:24 localhost python3[46015]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:55:25 localhost python3[46031]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Oct 5 03:55:26 localhost kernel: SELinux: Converting 2707 SID table entries... 
Oct 5 03:55:26 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 03:55:26 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 03:55:26 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=13 res=1 Oct 5 03:55:26 localhost python3[46053]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:55:28 localhost python3[46190]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Oct 5 03:55:28 localhost rsyslogd[759]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Oct 5 03:55:28 localhost python3[46206]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None attributes=None Oct 5 03:55:29 localhost python3[46222]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:55:29 localhost python3[46238]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, 
'/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': 
{'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 
'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 
'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}} Oct 5 03:55:34 localhost python3[46286]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:55:35 localhost python3[46329]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650934.6207623-81195-35018565337956/source _original_basename=tmpa_z79itb follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:55:35 localhost python3[46359]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:55:37 localhost python3[46482]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:55:39 localhost python3[46664]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 5 03:55:41 localhost python3[46695]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:55:42 localhost python3[46712]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:55:46 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 5 03:55:46 localhost dbus-broker-launch[18408]: Noticed file-system modification, trigger reload. Oct 5 03:55:46 localhost dbus-broker-launch[18408]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Oct 5 03:55:46 localhost dbus-broker-launch[18408]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Oct 5 03:55:46 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 5 03:55:46 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 5 03:55:46 localhost systemd[1]: Reexecuting. 
Oct 5 03:55:46 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 5 03:55:46 localhost systemd[1]: Detected virtualization kvm. Oct 5 03:55:46 localhost systemd[1]: Detected architecture x86-64. Oct 5 03:55:46 localhost systemd-sysv-generator[46769]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:55:46 localhost systemd-rc-local-generator[46765]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:55:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:55:55 localhost kernel: SELinux: Converting 2707 SID table entries... Oct 5 03:55:55 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 03:55:55 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 03:55:55 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. 
Oct 5 03:55:55 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=14 res=1 Oct 5 03:55:55 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 5 03:55:57 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 03:55:57 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 03:55:57 localhost systemd[1]: Reloading. Oct 5 03:55:57 localhost systemd-sysv-generator[47365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:55:57 localhost systemd-rc-local-generator[47357]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:55:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:55:57 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 03:55:57 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 03:55:57 localhost systemd-journald[619]: Journal stopped Oct 5 03:55:57 localhost systemd[1]: Stopping Journal Service... Oct 5 03:55:57 localhost systemd-journald[619]: Received SIGTERM from PID 1 (systemd). Oct 5 03:55:57 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files... Oct 5 03:55:57 localhost systemd[1]: systemd-journald.service: Deactivated successfully. Oct 5 03:55:57 localhost systemd[1]: Stopped Journal Service. Oct 5 03:55:57 localhost systemd[1]: systemd-journald.service: Consumed 1.856s CPU time. Oct 5 03:55:57 localhost systemd[1]: Starting Journal Service... Oct 5 03:55:57 localhost systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 5 03:55:57 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files. 
Oct 5 03:55:57 localhost systemd[1]: systemd-udevd.service: Consumed 3.112s CPU time.
Oct 5 03:55:57 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 5 03:55:57 localhost systemd-journald[47722]: Journal started
Oct 5 03:55:57 localhost systemd-journald[47722]: Runtime Journal (/run/log/journal/19f34a97e4e878e70ef0e6e08186acc9) is 12.1M, max 314.7M, 302.6M free.
Oct 5 03:55:57 localhost systemd[1]: Started Journal Service.
Oct 5 03:55:57 localhost systemd-journald[47722]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Oct 5 03:55:57 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 5 03:55:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 03:55:57 localhost systemd-udevd[47728]: Using default interface naming scheme 'rhel-9.0'.
Oct 5 03:55:57 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 5 03:55:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 03:55:57 localhost systemd[1]: Reloading.
Oct 5 03:55:57 localhost systemd-rc-local-generator[48333]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:55:57 localhost systemd-sysv-generator[48340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:55:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:55:58 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 03:55:58 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 03:55:58 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 03:55:58 localhost systemd[1]: man-db-cache-update.service: Consumed 1.482s CPU time.
Oct 5 03:55:58 localhost systemd[1]: run-r08f74d63db694e9d9a1743106c20375e.service: Deactivated successfully.
Oct 5 03:55:58 localhost systemd[1]: run-r94ae1956bee5492b94eae938147cd693.service: Deactivated successfully.
Oct 5 03:55:59 localhost python3[48690]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Oct 5 03:56:00 localhost python3[48709]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:56:01 localhost python3[48727]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:56:01 localhost python3[48727]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Oct 5 03:56:01 localhost python3[48727]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Oct 5 03:56:09 localhost podman[48740]: 2025-10-05 07:56:01.308143259 +0000 UTC m=+0.041002061 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Oct 5 03:56:09 localhost python3[48727]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 1571c200d626c35388c5864f613dd17fb1618f6192fe622da60a47fa61763c46 --format json
Oct 5 03:56:09 localhost python3[48886]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:56:09 localhost python3[48886]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Oct 5 03:56:09 localhost python3[48886]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Oct 5 03:56:18 localhost podman[48899]: 2025-10-05 07:56:09.903887195 +0000 UTC m=+0.046208336 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Oct 5 03:56:18 localhost python3[48886]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 1e3eee8f9b979ec527f69dda079bc969bf9ddbe65c90f0543f3891d72e56a75e --format json
Oct 5 03:56:18 localhost python3[49059]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:56:18 localhost python3[49059]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Oct 5 03:56:18 localhost python3[49059]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Oct 5 03:56:37 localhost podman[49071]: 2025-10-05 07:56:19.012951773 +0000 UTC m=+0.044124037 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 5 03:56:37 localhost python3[49059]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a56a2196ea2290002b5e3e60b4c440f2326e4f1173ca4d9c0a320716a756e568 --format json
Oct 5 03:56:38 localhost python3[49993]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:56:38 localhost python3[49993]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Oct 5 03:56:38 localhost python3[49993]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Oct 5 03:56:41 localhost systemd[1]: tmp-crun.YT0W4w.mount: Deactivated successfully.
Oct 5 03:56:41 localhost podman[50184]: 2025-10-05 07:56:41.690577877 +0000 UTC m=+0.093922499 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, RELEASE=main, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, architecture=x86_64, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True)
Oct 5 03:56:41 localhost podman[50184]: 2025-10-05 07:56:41.791051512 +0000 UTC m=+0.194396224 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, version=7, build-date=2025-09-24T08:57:55, release=553, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 5 03:56:51 localhost podman[50005]: 2025-10-05 07:56:38.267219806 +0000 UTC m=+0.043650201 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 5 03:56:51 localhost python3[49993]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 89ed729ad5d881399a0bbd370b8f3c39b84e5a87c6e02b0d1f2c943d2d9cfb7a --format json
Oct 5 03:56:52 localhost python3[50365]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:56:52 localhost python3[50365]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Oct 5 03:56:52 localhost python3[50365]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Oct 5 03:56:59 localhost podman[50377]: 2025-10-05 07:56:52.316268551 +0000 UTC m=+0.043475346 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Oct 5 03:56:59 localhost python3[50365]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a5e44a6280ab7a1da1b469cc214b40ecdad1d13f0c37c24f32cb45b40cce41d6 --format json
Oct 5 03:57:00 localhost python3[50512]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:57:00 localhost python3[50512]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Oct 5 03:57:00 localhost python3[50512]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Oct 5 03:57:05 localhost podman[50524]: 2025-10-05 07:57:00.347844327 +0000 UTC m=+0.042256863 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Oct 5 03:57:05 localhost python3[50512]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect ef4308e71ba3950618e5de99f6c775558514a06fb9f6d93ca5c54d685a1349a6 --format json
Oct 5 03:57:05 localhost python3[50646]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:57:05 localhost python3[50646]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Oct 5 03:57:05 localhost python3[50646]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Oct 5 03:57:08 localhost podman[50658]: 2025-10-05 07:57:05.875621366 +0000 UTC m=+0.043879707 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Oct 5 03:57:08 localhost python3[50646]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 5b5e3dbf480a168d795a47e53d0695cd833f381ef10119a3de87e5946f6b53e5 --format json
Oct 5 03:57:08 localhost python3[50778]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:57:08 localhost python3[50778]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Oct 5 03:57:08 localhost python3[50778]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Oct 5 03:57:11 localhost podman[50790]: 2025-10-05 07:57:08.891871678 +0000 UTC m=+0.044067651 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Oct 5 03:57:11 localhost python3[50778]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 250768c493b95c1151e047902a648e6659ba35adb4c6e0af85c231937d0cc9b7 --format json
Oct 5 03:57:11 localhost python3[50912]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:57:11 localhost python3[50912]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Oct 5 03:57:11 localhost python3[50912]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Oct 5 03:57:15 localhost podman[50925]: 2025-10-05 07:57:11.627075518 +0000 UTC m=+0.048934773 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Oct 5 03:57:15 localhost python3[50912]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 68d3d3a77bfc9fce94ca9ce2b28076450b851f6f1e82e97fbe356ce4ab0f7849 --format json
Oct 5 03:57:15 localhost python3[51048]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:57:15 localhost python3[51048]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Oct 5 03:57:15 localhost python3[51048]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Oct 5 03:57:21 localhost podman[51060]: 2025-10-05 07:57:15.563941642 +0000 UTC m=+0.039881668 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Oct 5 03:57:21 localhost python3[51048]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 01fc8d861e2b923ef0bf1d5c40a269bd976b00e8a31e8c56d63f3504b82b1c76 --format json
Oct 5 03:57:21 localhost python3[51193]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 5 03:57:21 localhost python3[51193]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Oct 5 03:57:21 localhost python3[51193]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Oct 5 03:57:24 localhost podman[51207]: 2025-10-05 07:57:21.613253812 +0000 UTC m=+0.052302434 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Oct 5 03:57:24 localhost python3[51193]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 7f7fcb1a516a6191c7a8cb132a460e04d50ca4381f114f08dcbfe84340e49ac0 --format json
Oct 5 03:57:25 localhost python3[51329]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:57:26 localhost ansible-async_wrapper.py[51501]: Invoked with 763048645259 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651046.2902327-84102-184552062011089/AnsiballZ_command.py _
Oct 5 03:57:26 localhost ansible-async_wrapper.py[51504]: Starting module and watcher
Oct 5 03:57:26 localhost ansible-async_wrapper.py[51504]: Start watching 51505 (3600)
Oct 5 03:57:26 localhost ansible-async_wrapper.py[51505]: Start module (51505)
Oct 5 03:57:26 localhost ansible-async_wrapper.py[51501]: Return async_wrapper task started.
Oct 5 03:57:27 localhost python3[51523]: ansible-ansible.legacy.async_status Invoked with jid=763048645259.51501 mode=status _async_dir=/tmp/.ansible_async
Oct 5 03:57:30 localhost puppet-user[51525]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 5 03:57:30 localhost puppet-user[51525]: (file: /etc/puppet/hiera.yaml)
Oct 5 03:57:30 localhost puppet-user[51525]: Warning: Undefined variable '::deploy_config_name';
Oct 5 03:57:30 localhost puppet-user[51525]: (file & line not available)
Oct 5 03:57:30 localhost puppet-user[51525]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 5 03:57:30 localhost puppet-user[51525]: (file & line not available)
Oct 5 03:57:30 localhost puppet-user[51525]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Oct 5 03:57:30 localhost puppet-user[51525]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Oct 5 03:57:30 localhost puppet-user[51525]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.13 seconds
Oct 5 03:57:30 localhost puppet-user[51525]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Oct 5 03:57:30 localhost puppet-user[51525]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 5 03:57:30 localhost puppet-user[51525]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 5 03:57:30 localhost puppet-user[51525]: Notice: Applied catalog in 0.15 seconds
Oct 5 03:57:30 localhost puppet-user[51525]: Application:
Oct 5 03:57:30 localhost puppet-user[51525]: Initial environment: production
Oct 5 03:57:30 localhost puppet-user[51525]: Converged environment: production
Oct 5 03:57:30 localhost puppet-user[51525]: Run mode: user
Oct 5 03:57:30 localhost puppet-user[51525]: Changes:
Oct 5 03:57:30 localhost puppet-user[51525]: Total: 3
Oct 5 03:57:30 localhost puppet-user[51525]: Events:
Oct 5 03:57:30 localhost puppet-user[51525]: Success: 3
Oct 5 03:57:30 localhost puppet-user[51525]: Total: 3
Oct 5 03:57:30 localhost puppet-user[51525]: Resources:
Oct 5 03:57:30 localhost puppet-user[51525]: Changed: 3
Oct 5 03:57:30 localhost puppet-user[51525]: Out of sync: 3
Oct 5 03:57:30 localhost puppet-user[51525]: Total: 10
Oct 5 03:57:30 localhost puppet-user[51525]: Time:
Oct 5 03:57:30 localhost puppet-user[51525]: Filebucket: 0.00
Oct 5 03:57:30 localhost puppet-user[51525]: Schedule: 0.00
Oct 5 03:57:30 localhost puppet-user[51525]: File: 0.00
Oct 5 03:57:30 localhost puppet-user[51525]: Exec: 0.02
Oct 5 03:57:30 localhost puppet-user[51525]: Augeas: 0.08
Oct 5 03:57:30 localhost puppet-user[51525]: Transaction evaluation: 0.12
Oct 5 03:57:30 localhost puppet-user[51525]: Catalog application: 0.15
Oct 5 03:57:30 localhost puppet-user[51525]: Config retrieval: 0.17
Oct 5 03:57:30 localhost puppet-user[51525]: Last run: 1759651050
Oct 5 03:57:30 localhost puppet-user[51525]: Total: 0.15
Oct 5 03:57:30 localhost puppet-user[51525]: Version:
Oct 5 03:57:30 localhost puppet-user[51525]: Config: 1759651050
Oct 5 03:57:30 localhost puppet-user[51525]: Puppet: 7.10.0
Oct 5 03:57:31 localhost ansible-async_wrapper.py[51505]: Module complete (51505)
Oct 5 03:57:31 localhost ansible-async_wrapper.py[51504]: Done in kid B.
Oct 5 03:57:37 localhost python3[51652]: ansible-ansible.legacy.async_status Invoked with jid=763048645259.51501 mode=status _async_dir=/tmp/.ansible_async
Oct 5 03:57:38 localhost python3[51668]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 03:57:38 localhost python3[51684]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:57:38 localhost python3[51732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:57:39 localhost python3[51775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651058.5770783-84347-46636876673714/source _original_basename=tmpf3mu46c5 follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 03:57:39 localhost python3[51805]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:57:40 localhost python3[51908]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 5 03:57:41 localhost python3[51927]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 03:57:41 localhost python3[51943]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005471150 step=1 update_config_hash_only=False
Oct 5 03:57:42 localhost python3[51959]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:57:42 localhost python3[51975]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Oct 5 03:57:43 localhost python3[51991]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 5 03:57:43 localhost python3[52031]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Oct 5 03:57:44 localhost podman[52242]: 2025-10-05 07:57:44.116590069 +0000 UTC m=+0.088278694 container create d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, vcs-type=git, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, batch=17.1_20250721.1)
Oct 5 03:57:44 localhost podman[52242]: 2025-10-05 07:57:44.05398271 +0000 UTC m=+0.025671355 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Oct 5 03:57:44 localhost podman[52279]: 2025-10-05 07:57:44.182212308 +0000 UTC m=+0.107302070 container create 25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, build-date=2025-07-21T13:07:52, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team)
Oct 5 03:57:44 localhost podman[52276]: 2025-10-05 07:57:44.099817443 +0000 UTC m=+0.032155569 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Oct 5 03:57:44 localhost podman[52278]: 2025-10-05 07:57:44.100404988 +0000 UTC m=+0.029882038 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 5 03:57:44 localhost podman[52279]: 2025-10-05 07:57:44.100555772 +0000 UTC m=+0.025645544 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Oct 5 03:57:44 localhost podman[52278]: 2025-10-05 07:57:44.242430864 +0000 UTC m=+0.171907944 container create 4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_id=tripleo_puppet_step1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-nova-libvirt, container_name=container-puppet-nova_libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS':
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-07-21T14:56:59, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1) Oct 5 03:57:44 localhost systemd[1]: Started libpod-conmon-d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a.scope. 
Oct 5 03:57:44 localhost podman[52276]: 2025-10-05 07:57:44.257208877 +0000 UTC m=+0.189546993 container create 45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, container_name=container-puppet-metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.9) Oct 5 03:57:44 localhost systemd[1]: Started libcrun container. Oct 5 03:57:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3f104779b88469bb62cf1c18b54d36a2e2abcd59d6c71ce85003d04687536a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:44 localhost systemd[1]: Started libpod-conmon-45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb.scope. 
Oct 5 03:57:44 localhost podman[52264]: 2025-10-05 07:57:44.291286356 +0000 UTC m=+0.238764265 container create ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, 
build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_puppet_step1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 5 03:57:44 localhost systemd[1]: Started libcrun container. Oct 5 03:57:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d58498c2cdfa113bdd5872f159def074e68966db8dd07b51998d8fd1ed6b97e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:44 localhost systemd[1]: Started libpod-conmon-ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df.scope. Oct 5 03:57:44 localhost podman[52276]: 2025-10-05 07:57:44.322805606 +0000 UTC m=+0.255143742 container init 45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=container-puppet-metrics_qdr, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:07:59, release=1, tcib_managed=true, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 5 03:57:44 localhost podman[52264]: 2025-10-05 07:57:44.223713304 +0000 UTC m=+0.171191223 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 5 03:57:44 localhost podman[52276]: 2025-10-05 07:57:44.336562542 +0000 UTC m=+0.268900678 container start 45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1, container_name=container-puppet-metrics_qdr, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_puppet_step1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:07:59, 
architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 03:57:44 localhost podman[52276]: 2025-10-05 07:57:44.337439006 +0000 UTC m=+0.269777142 container attach 45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=container-puppet-metrics_qdr, version=17.1.9, build-date=2025-07-21T13:07:59, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 03:57:44 localhost systemd[1]: Started libpod-conmon-4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a.scope. Oct 5 03:57:44 localhost systemd[1]: Started libcrun container. 
Oct 5 03:57:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7292a8f6b78ce03094e4e5ab015af3e201778e8f85a80ec214d712004c28e98f/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7292a8f6b78ce03094e4e5ab015af3e201778e8f85a80ec214d712004c28e98f/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:44 localhost podman[52264]: 2025-10-05 07:57:44.360494231 +0000 UTC m=+0.307972140 container init ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, release=1, architecture=x86_64, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_puppet_step1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Oct 5 03:57:44 localhost systemd[1]: Started libcrun container. Oct 5 03:57:44 localhost systemd[1]: Started libpod-conmon-25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4.scope. 
Oct 5 03:57:44 localhost podman[52264]: 2025-10-05 07:57:44.37131907 +0000 UTC m=+0.318796989 container start ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, architecture=x86_64, distribution-scope=public, config_id=tripleo_puppet_step1, tcib_managed=true, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Oct 5 03:57:44 localhost podman[52264]: 2025-10-05 07:57:44.371524825 +0000 UTC m=+0.319002734 container attach ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_id=tripleo_puppet_step1, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, container_name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 03:57:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce3030a8de9705bba7c6ae4b6665c937182eeff0e22e8eedbe214efa8697d3e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:44 localhost systemd[1]: Started libcrun container. 
Oct 5 03:57:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06641d9d36b3e454fdc187f20ab1465e19f8342f46236f26d04f5a338e6d8c90/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:45 localhost podman[52242]: 2025-10-05 07:57:45.433639056 +0000 UTC m=+1.405327711 container init d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, distribution-scope=public, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-collectd, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 03:57:45 localhost podman[52242]: 2025-10-05 07:57:45.449164599 +0000 UTC m=+1.420853254 container start d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, vcs-type=git, container_name=container-puppet-collectd, release=2, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64) Oct 5 03:57:45 localhost podman[52242]: 2025-10-05 07:57:45.450062954 +0000 UTC m=+1.421751639 container attach d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, 
tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, container_name=container-puppet-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12) Oct 5 03:57:45 
localhost podman[52278]: 2025-10-05 07:57:45.460355767 +0000 UTC m=+1.389832857 container init 4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, release=2, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, vcs-type=git, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=container-puppet-nova_libvirt, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt) Oct 5 03:57:45 localhost podman[52278]: 2025-10-05 07:57:45.469851801 +0000 UTC m=+1.399328881 container start 4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=container-puppet-nova_libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, release=2, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-07-21T14:56:59, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 03:57:45 localhost podman[52278]: 2025-10-05 07:57:45.47020318 +0000 UTC m=+1.399680310 container attach 4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, release=2, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team) Oct 5 03:57:45 localhost podman[52279]: 2025-10-05 
07:57:45.489090204 +0000 UTC m=+1.414179996 container init 25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.buildah.version=1.33.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_id=tripleo_puppet_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=container-puppet-crond, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 03:57:45 localhost podman[52279]: 2025-10-05 07:57:45.49869027 +0000 UTC m=+1.423780072 container start 25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-crond, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, vcs-type=git) Oct 5 03:57:45 localhost podman[52279]: 2025-10-05 07:57:45.498908785 +0000 UTC m=+1.423998577 container attach 25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-cron, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, container_name=container-puppet-crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Oct 5 03:57:47 localhost podman[52157]: 2025-10-05 07:57:44.007582404 +0000 UTC m=+0.050620440 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Oct 5 03:57:47 localhost puppet-user[52387]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 5 03:57:47 localhost puppet-user[52387]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:47 localhost puppet-user[52387]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:47 localhost puppet-user[52387]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[52407]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:47 localhost puppet-user[52407]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:47 localhost puppet-user[52407]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:47 localhost puppet-user[52407]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[52383]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:47 localhost puppet-user[52383]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:47 localhost puppet-user[52383]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:47 localhost puppet-user[52383]: (file & line not available) Oct 5 03:57:47 localhost ovs-vsctl[52793]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Oct 5 03:57:47 localhost puppet-user[52387]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:47 localhost puppet-user[52387]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[52407]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:47 localhost puppet-user[52407]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[52427]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 5 03:57:47 localhost puppet-user[52427]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:47 localhost puppet-user[52427]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:47 localhost puppet-user[52427]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[52383]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:47 localhost puppet-user[52383]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[52387]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.10 seconds Oct 5 03:57:47 localhost podman[52757]: 2025-10-05 07:57:47.266682216 +0000 UTC m=+0.094059438 container create d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, container_name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, build-date=2025-07-21T14:49:23, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, name=rhosp17/openstack-ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-central) Oct 5 03:57:47 localhost puppet-user[52383]: Notice: Accepting previously invalid value for target type 'Integer' Oct 5 03:57:47 localhost puppet-user[52427]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:47 localhost puppet-user[52427]: (file & line not available) Oct 5 03:57:47 localhost systemd[1]: Started libpod-conmon-d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da.scope. Oct 5 03:57:47 localhost systemd[1]: Started libcrun container. 
Oct 5 03:57:47 localhost puppet-user[52427]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.08 seconds Oct 5 03:57:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c484c2c33151d186607b10732773e55aa9c5180a8691161d386c5cd0a84bfde/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:47 localhost puppet-user[52383]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.14 seconds Oct 5 03:57:47 localhost podman[52757]: 2025-10-05 07:57:47.318457546 +0000 UTC m=+0.145834768 container init d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, name=rhosp17/openstack-ceilometer-central, io.openshift.expose-services=, build-date=2025-07-21T14:49:23, vendor=Red Hat, Inc., release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-central-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central) Oct 5 03:57:47 localhost podman[52757]: 2025-10-05 07:57:47.328245097 +0000 UTC m=+0.155622309 container start d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.openshift.expose-services=, build-date=2025-07-21T14:49:23, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, container_name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public) Oct 5 03:57:47 localhost podman[52757]: 2025-10-05 07:57:47.328563395 +0000 UTC m=+0.155940617 container attach d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., container_name=container-puppet-ceilometer, architecture=x86_64, release=1, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:49:23, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, com.redhat.component=openstack-ceilometer-central-container, io.buildah.version=1.33.12, config_id=tripleo_puppet_step1, tcib_managed=true, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f) Oct 5 03:57:47 localhost podman[52757]: 2025-10-05 07:57:47.230147412 +0000 UTC m=+0.057524644 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Oct 5 03:57:47 localhost puppet-user[52417]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 5 03:57:47 localhost puppet-user[52417]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:47 localhost puppet-user[52387]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:47 localhost puppet-user[52417]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[52387]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Oct 5 03:57:47 localhost puppet-user[52427]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Oct 5 03:57:47 localhost puppet-user[52383]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Oct 5 03:57:47 localhost puppet-user[52383]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Oct 5 03:57:47 localhost puppet-user[52383]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Oct 5 03:57:47 localhost puppet-user[52383]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Oct 5 03:57:47 localhost puppet-user[52427]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Oct 5 03:57:47 localhost puppet-user[52387]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Oct 5 03:57:47 localhost puppet-user[52417]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:47 localhost puppet-user[52417]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[52383]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}0c451a0cb7c69d5dfd90d8d56b8a0f926f4242a37450658d52dc8a7242febb0f' Oct 5 03:57:47 localhost puppet-user[52383]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Oct 5 03:57:47 localhost puppet-user[52383]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Oct 5 03:57:47 localhost puppet-user[52427]: Notice: Applied catalog in 0.09 seconds Oct 5 03:57:47 localhost puppet-user[52427]: Application: Oct 5 03:57:47 localhost puppet-user[52427]: Initial environment: production Oct 5 03:57:47 localhost puppet-user[52427]: Converged environment: production Oct 5 03:57:47 localhost puppet-user[52383]: Notice: Applied catalog in 0.08 seconds Oct 5 03:57:47 localhost puppet-user[52427]: Run mode: user Oct 5 03:57:47 localhost puppet-user[52383]: Application: Oct 5 03:57:47 localhost puppet-user[52383]: Initial environment: production Oct 5 03:57:47 localhost puppet-user[52383]: Converged environment: production Oct 5 03:57:47 localhost puppet-user[52383]: Run mode: user Oct 5 03:57:47 localhost puppet-user[52383]: Changes: Oct 5 03:57:47 localhost puppet-user[52383]: Total: 7 Oct 5 03:57:47 localhost puppet-user[52383]: Events: Oct 5 03:57:47 localhost puppet-user[52383]: Success: 7 Oct 5 03:57:47 localhost puppet-user[52383]: Total: 7 Oct 5 03:57:47 localhost puppet-user[52383]: Resources: Oct 5 03:57:47 localhost puppet-user[52383]: Skipped: 13 Oct 5 03:57:47 localhost puppet-user[52383]: Changed: 5 Oct 5 03:57:47 localhost puppet-user[52383]: Out of sync: 5 Oct 5 03:57:47 localhost puppet-user[52383]: Total: 20 Oct 5 03:57:47 localhost puppet-user[52383]: Time: Oct 5 
03:57:47 localhost puppet-user[52383]: File: 0.07 Oct 5 03:57:47 localhost puppet-user[52383]: Transaction evaluation: 0.08 Oct 5 03:57:47 localhost puppet-user[52383]: Catalog application: 0.08 Oct 5 03:57:47 localhost puppet-user[52383]: Config retrieval: 0.17 Oct 5 03:57:47 localhost puppet-user[52383]: Last run: 1759651067 Oct 5 03:57:47 localhost puppet-user[52383]: Total: 0.09 Oct 5 03:57:47 localhost puppet-user[52383]: Version: Oct 5 03:57:47 localhost puppet-user[52383]: Config: 1759651067 Oct 5 03:57:47 localhost puppet-user[52383]: Puppet: 7.10.0 Oct 5 03:57:47 localhost puppet-user[52427]: Changes: Oct 5 03:57:47 localhost puppet-user[52427]: Total: 2 Oct 5 03:57:47 localhost puppet-user[52427]: Events: Oct 5 03:57:47 localhost puppet-user[52427]: Success: 2 Oct 5 03:57:47 localhost puppet-user[52427]: Total: 2 Oct 5 03:57:47 localhost puppet-user[52427]: Resources: Oct 5 03:57:47 localhost puppet-user[52427]: Changed: 2 Oct 5 03:57:47 localhost puppet-user[52427]: Out of sync: 2 Oct 5 03:57:47 localhost puppet-user[52427]: Skipped: 7 Oct 5 03:57:47 localhost puppet-user[52427]: Total: 9 Oct 5 03:57:47 localhost puppet-user[52427]: Time: Oct 5 03:57:47 localhost puppet-user[52427]: File: 0.01 Oct 5 03:57:47 localhost puppet-user[52427]: Cron: 0.06 Oct 5 03:57:47 localhost puppet-user[52427]: Transaction evaluation: 0.09 Oct 5 03:57:47 localhost puppet-user[52427]: Catalog application: 0.09 Oct 5 03:57:47 localhost puppet-user[52427]: Config retrieval: 0.11 Oct 5 03:57:47 localhost puppet-user[52427]: Last run: 1759651067 Oct 5 03:57:47 localhost puppet-user[52427]: Total: 0.09 Oct 5 03:57:47 localhost puppet-user[52427]: Version: Oct 5 03:57:47 localhost puppet-user[52427]: Config: 1759651067 Oct 5 03:57:47 localhost puppet-user[52427]: Puppet: 7.10.0 Oct 5 03:57:47 localhost puppet-user[52407]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.34 seconds Oct 5 03:57:47 localhost puppet-user[52417]: Warning: 
Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Oct 5 03:57:47 localhost puppet-user[52417]: in a future release. Use nova::cinder::os_region_name instead Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Oct 5 03:57:47 localhost puppet-user[52417]: in a future release. Use nova::cinder::catalog_info instead Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. 
(file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: 
/Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Oct 5 03:57:47 localhost systemd[1]: libpod-25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4.scope: Deactivated successfully. Oct 5 03:57:47 localhost systemd[1]: libpod-25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4.scope: Consumed 2.028s CPU time. 
Oct 5 03:57:47 localhost podman[52279]: 2025-10-05 07:57:47.743609838 +0000 UTC m=+3.668699620 container died 25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, release=1, container_name=container-puppet-crond, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 5 03:57:47 localhost systemd[1]: libpod-45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb.scope: Deactivated successfully. Oct 5 03:57:47 localhost systemd[1]: libpod-45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb.scope: Consumed 2.095s CPU time. Oct 5 03:57:47 localhost podman[52276]: 2025-10-05 07:57:47.75642656 +0000 UTC m=+3.688764666 container died 45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 03:57:47 localhost puppet-user[52387]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Oct 5 03:57:47 localhost puppet-user[52387]: Notice: Applied catalog in 0.47 seconds Oct 5 03:57:47 localhost puppet-user[52387]: Application: Oct 5 03:57:47 localhost puppet-user[52387]: Initial environment: production Oct 5 03:57:47 localhost puppet-user[52387]: Converged environment: production Oct 5 03:57:47 localhost puppet-user[52387]: Run mode: user Oct 5 03:57:47 localhost puppet-user[52387]: Changes: Oct 5 03:57:47 localhost 
puppet-user[52387]: Total: 4 Oct 5 03:57:47 localhost puppet-user[52387]: Events: Oct 5 03:57:47 localhost puppet-user[52387]: Success: 4 Oct 5 03:57:47 localhost puppet-user[52387]: Total: 4 Oct 5 03:57:47 localhost puppet-user[52387]: Resources: Oct 5 03:57:47 localhost puppet-user[52387]: Changed: 4 Oct 5 03:57:47 localhost puppet-user[52387]: Out of sync: 4 Oct 5 03:57:47 localhost puppet-user[52387]: Skipped: 8 Oct 5 03:57:47 localhost puppet-user[52387]: Total: 13 Oct 5 03:57:47 localhost puppet-user[52387]: Time: Oct 5 03:57:47 localhost puppet-user[52387]: File: 0.00 Oct 5 03:57:47 localhost puppet-user[52387]: Exec: 0.06 Oct 5 03:57:47 localhost puppet-user[52387]: Config retrieval: 0.14 Oct 5 03:57:47 localhost puppet-user[52387]: Augeas: 0.40 Oct 5 03:57:47 localhost puppet-user[52387]: Transaction evaluation: 0.47 Oct 5 03:57:47 localhost puppet-user[52387]: Catalog application: 0.47 Oct 5 03:57:47 localhost puppet-user[52387]: Last run: 1759651067 Oct 5 03:57:47 localhost puppet-user[52387]: Total: 0.47 Oct 5 03:57:47 localhost puppet-user[52387]: Version: Oct 5 03:57:47 localhost puppet-user[52387]: Config: 1759651067 Oct 5 03:57:47 localhost puppet-user[52387]: Puppet: 7.10.0 Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. 
(file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}dee3f10cb1ff461ac3f1e743a5ef3f06993398c6c829895de1dae7f242a64b39' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: 
/Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. 
Use the same parameter in nova::glance Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Oct 5 03:57:47 localhost podman[52975]: 2025-10-05 07:57:47.879457789 +0000 UTC m=+0.115093508 container cleanup 45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, build-date=2025-07-21T13:07:59, distribution-scope=public, container_name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Oct 5 03:57:47 localhost podman[52968]: 2025-10-05 07:57:47.892010914 +0000 UTC m=+0.142032117 container cleanup 25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, config_id=tripleo_puppet_step1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, container_name=container-puppet-crond, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Oct 5 03:57:47 localhost systemd[1]: libpod-conmon-45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb.scope: Deactivated successfully. 
Oct 5 03:57:47 localhost python3[52031]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471150 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file 
--log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Oct 5 03:57:47 localhost systemd[1]: libpod-conmon-25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4.scope: Deactivated successfully. 
Oct 5 03:57:47 localhost python3[52031]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471150 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Oct 5 03:57:47 localhost puppet-user[52407]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Oct 5 03:57:47 localhost puppet-user[52407]: Notice: Applied catalog in 0.37 seconds Oct 5 03:57:47 localhost puppet-user[52407]: Application: Oct 5 03:57:47 localhost puppet-user[52407]: Initial environment: production Oct 5 03:57:47 localhost puppet-user[52407]: Converged environment: production Oct 5 03:57:47 localhost puppet-user[52407]: Run mode: user Oct 5 03:57:47 
localhost puppet-user[52407]: Changes: Oct 5 03:57:47 localhost puppet-user[52407]: Total: 43 Oct 5 03:57:47 localhost puppet-user[52407]: Events: Oct 5 03:57:47 localhost puppet-user[52407]: Success: 43 Oct 5 03:57:47 localhost puppet-user[52407]: Total: 43 Oct 5 03:57:47 localhost puppet-user[52407]: Resources: Oct 5 03:57:47 localhost puppet-user[52407]: Skipped: 14 Oct 5 03:57:47 localhost puppet-user[52407]: Changed: 38 Oct 5 03:57:47 localhost puppet-user[52407]: Out of sync: 38 Oct 5 03:57:47 localhost puppet-user[52407]: Total: 82 Oct 5 03:57:47 localhost puppet-user[52407]: Time: Oct 5 03:57:47 localhost puppet-user[52407]: Concat fragment: 0.00 Oct 5 03:57:47 localhost puppet-user[52407]: File: 0.20 Oct 5 03:57:47 localhost puppet-user[52407]: Transaction evaluation: 0.36 Oct 5 03:57:47 localhost puppet-user[52407]: Catalog application: 0.37 Oct 5 03:57:47 localhost puppet-user[52407]: Config retrieval: 0.40 Oct 5 03:57:47 localhost puppet-user[52407]: Last run: 1759651067 Oct 5 03:57:47 localhost puppet-user[52407]: Concat file: 0.00 Oct 5 03:57:47 localhost puppet-user[52407]: Total: 0.37 Oct 5 03:57:47 localhost puppet-user[52407]: Version: Oct 5 03:57:47 localhost puppet-user[52407]: Config: 1759651067 Oct 5 03:57:47 localhost puppet-user[52407]: Puppet: 7.10.0 Oct 5 03:57:47 localhost puppet-user[52417]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Oct 5 03:57:48 localhost systemd[1]: libpod-ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df.scope: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: libpod-ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df.scope: Consumed 2.527s CPU time. 
Oct 5 03:57:48 localhost podman[52264]: 2025-10-05 07:57:48.107262701 +0000 UTC m=+4.054740650 container died ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, container_name=container-puppet-iscsid, maintainer=OpenStack TripleO Team, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, architecture=x86_64, version=17.1.9) Oct 5 03:57:48 localhost podman[53096]: 2025-10-05 07:57:48.220033218 +0000 UTC m=+0.096003471 container cleanup ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vcs-type=git, config_id=tripleo_puppet_step1, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=container-puppet-iscsid, architecture=x86_64) Oct 5 03:57:48 localhost systemd[1]: libpod-conmon-ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df.scope: Deactivated successfully. 
Oct 5 03:57:48 localhost python3[52031]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471150 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 5 03:57:48 localhost systemd[1]: tmp-crun.KkYs4j.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay-7292a8f6b78ce03094e4e5ab015af3e201778e8f85a80ec214d712004c28e98f-merged.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea90758d9b12d982caa4f07ff35739d0a25178ce2f10907828fde76734ed71df-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay-1d58498c2cdfa113bdd5872f159def074e68966db8dd07b51998d8fd1ed6b97e-merged.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45c3d1ee208628552a6b8152988c01fb80994d97788e437dfe867857b32b60fb-userdata-shm.mount: Deactivated successfully. 
Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay-06641d9d36b3e454fdc187f20ab1465e19f8342f46236f26d04f5a338e6d8c90-merged.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25739853b8fc7fb31c4fd59d70f91fcaaf0f08bdfb7ceebed5e003110ea28ae4-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: libpod-d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a.scope: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: libpod-d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a.scope: Consumed 2.521s CPU time. Oct 5 03:57:48 localhost podman[52242]: 2025-10-05 07:57:48.309991346 +0000 UTC m=+4.281679981 container died d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, config_id=tripleo_puppet_step1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=container-puppet-collectd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, io.buildah.version=1.33.12) Oct 5 03:57:48 localhost podman[53180]: 2025-10-05 07:57:48.346043176 +0000 UTC m=+0.084907024 container create e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, 
io.buildah.version=1.33.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=container-puppet-ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_id=tripleo_puppet_step1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller) Oct 5 03:57:48 
localhost podman[53188]: 2025-10-05 07:57:48.374983758 +0000 UTC m=+0.097083349 container create ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, container_name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_puppet_step1) Oct 5 03:57:48 localhost podman[53223]: 2025-10-05 07:57:48.405315827 +0000 UTC m=+0.087967937 container cleanup d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, release=2, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, io.buildah.version=1.33.12, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, com.redhat.component=openstack-collectd-container, vcs-type=git) Oct 5 03:57:48 localhost podman[53180]: 2025-10-05 07:57:48.308021993 +0000 UTC m=+0.046885861 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 03:57:48 localhost systemd[1]: libpod-conmon-d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a.scope: Deactivated successfully. 
Oct 5 03:57:48 localhost python3[52031]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471150 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 5 03:57:48 localhost podman[53188]: 2025-10-05 07:57:48.324863512 +0000 UTC m=+0.046963133 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 5 03:57:48 localhost systemd[1]: Started libpod-conmon-e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4.scope. Oct 5 03:57:48 localhost systemd[1]: Started libcrun container. Oct 5 03:57:48 localhost systemd[1]: Started libpod-conmon-ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164.scope. 
Oct 5 03:57:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335cbaad9e9dbe5a781de03ec18c97c056a8bddf0cee816826c782168df02dab/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff)
Oct 5 03:57:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335cbaad9e9dbe5a781de03ec18c97c056a8bddf0cee816826c782168df02dab/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 5 03:57:48 localhost podman[53180]: 2025-10-05 07:57:48.475175009 +0000 UTC m=+0.214038867 container init e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, container_name=container-puppet-ovn_controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64)
Oct 5 03:57:48 localhost systemd[1]: Started libcrun container.
Oct 5 03:57:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535eb0a0eb77dd364c41c5b73a831eb7614980993ab80ff91603f8c8c9ad1931/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 5 03:57:48 localhost podman[53180]: 2025-10-05 07:57:48.496666961 +0000 UTC m=+0.235530809 container start e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, distribution-scope=public)
Oct 5 03:57:48 localhost podman[53180]: 2025-10-05 07:57:48.497179185 +0000 UTC m=+0.236043093 container attach e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, io.buildah.version=1.33.12, distribution-scope=public)
Oct 5 03:57:48 localhost podman[53188]: 2025-10-05 07:57:48.499601879 +0000 UTC m=+0.221701480 container init ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, architecture=x86_64, distribution-scope=public, config_id=tripleo_puppet_step1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T12:58:40, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-rsyslog, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-rsyslog-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, container_name=container-puppet-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog)
Oct 5 03:57:48 localhost podman[53188]: 2025-10-05 07:57:48.507151671 +0000 UTC m=+0.229251262 container start ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Oct 5 03:57:48 localhost podman[53188]: 2025-10-05 07:57:48.507678655 +0000 UTC m=+0.229778276 container attach ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, container_name=container-puppet-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, config_id=tripleo_puppet_step1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:40, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container)
Oct 5 03:57:48 localhost puppet-user[52417]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 1.29 seconds
Oct 5 03:57:48 localhost puppet-user[52417]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}8f8307e6752131cfe7b76229011dc2c20353b7703527f4239dafa25c131174e7'
Oct 5 03:57:48 localhost puppet-user[52417]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created
Oct 5 03:57:48 localhost puppet-user[52417]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe'
Oct 5 03:57:48 localhost puppet-user[52417]: Warning: Empty environment setting 'TLS_PASSWORD'
Oct 5 03:57:48 localhost puppet-user[52417]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182)
Oct 5 03:57:48 localhost puppet-user[52417]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully
Oct 5 03:57:48 localhost puppet-user[52417]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}18702261db115bd07cacc9444f9a28a0592c863061b61e41d99fc113ec9c38a8'
Oct 5 03:57:48 localhost puppet-user[52417]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created
Oct 5 03:57:48 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Oct 5 03:57:49 localhost systemd[1]: var-lib-containers-storage-overlay-6d3f104779b88469bb62cf1c18b54d36a2e2abcd59d6c71ce85003d04687536a-merged.mount: Deactivated successfully.
Oct 5 03:57:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d70d753e046942a517189cc30b73c336aa2313c7b3bc4e1be1f1e57faf0dd71a-userdata-shm.mount: Deactivated successfully.
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 5 03:57:49 localhost puppet-user[52915]: (file: /etc/puppet/hiera.yaml)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Undefined variable '::deploy_config_name';
Oct 5 03:57:49 localhost puppet-user[52915]: (file & line not available)
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 5 03:57:49 localhost puppet-user[52915]: (file & line not available)
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39)
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36)
Oct 5 03:57:49 localhost puppet-user[52915]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26)
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.38 seconds
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created Oct 
5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created Oct 5 03:57:50 localhost puppet-user[52915]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created Oct 5 03:57:50 localhost puppet-user[53322]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:50 localhost puppet-user[53322]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:50 localhost puppet-user[53322]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:50 localhost puppet-user[53322]: (file & line not available) Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created Oct 5 03:57:50 localhost puppet-user[53322]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:50 localhost puppet-user[53322]: (file & line not available) Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created Oct 5 03:57:50 localhost puppet-user[53320]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5
Oct 5 03:57:50 localhost puppet-user[53320]: (file: /etc/puppet/hiera.yaml)
Oct 5 03:57:50 localhost puppet-user[53320]: Warning: Undefined variable '::deploy_config_name';
Oct 5 03:57:50 localhost puppet-user[53320]: (file & line not available)
Oct 5 03:57:50 localhost puppet-user[52915]: Notice: Applied catalog in 0.40 seconds
Oct 5 03:57:50 localhost puppet-user[52915]: Application:
Oct 5 03:57:50 localhost puppet-user[52915]:    Initial environment: production
Oct 5 03:57:50 localhost puppet-user[52915]:    Converged environment: production
Oct 5 03:57:50 localhost puppet-user[52915]:    Run mode: user
Oct 5 03:57:50 localhost puppet-user[52915]: Changes:
Oct 5 03:57:50 localhost puppet-user[52915]:    Total: 31
Oct 5 03:57:50 localhost puppet-user[52915]: Events:
Oct 5 03:57:50 localhost puppet-user[52915]:    Success: 31
Oct 5 03:57:50 localhost puppet-user[52915]:    Total: 31
Oct 5 03:57:50 localhost puppet-user[52915]: Resources:
Oct 5 03:57:50 localhost puppet-user[52915]:    Skipped: 22
Oct 5 03:57:50 localhost puppet-user[52915]:    Changed: 31
Oct 5 03:57:50 localhost puppet-user[52915]:    Out of sync: 31
Oct 5 03:57:50 localhost puppet-user[52915]:    Total: 151
Oct 5 03:57:50 localhost puppet-user[52915]: Time:
Oct 5 03:57:50 localhost puppet-user[52915]:    Package: 0.01
Oct 5 03:57:50 localhost puppet-user[52915]:    Ceilometer config: 0.33
Oct 5 03:57:50 localhost puppet-user[52915]:    Transaction evaluation: 0.40
Oct 5 03:57:50 localhost puppet-user[52915]:    Catalog application: 0.40
Oct 5 03:57:50 localhost puppet-user[52915]:    Config retrieval: 0.45
Oct 5 03:57:50 localhost puppet-user[52915]:    Last run: 1759651070
Oct 5 03:57:50 localhost puppet-user[52915]:    Resources: 0.00
Oct 5 03:57:50 localhost puppet-user[52915]:    Total: 0.40
Oct 5 03:57:50 localhost puppet-user[52915]: Version:
Oct 5 03:57:50 localhost puppet-user[52915]:    Config: 1759651069
Oct 5 03:57:50 localhost puppet-user[52915]:    Puppet: 7.10.0
Oct 5 03:57:50 localhost
puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created Oct 5 03:57:50 localhost puppet-user[53320]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:50 localhost puppet-user[53320]: (file & line not available) Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created Oct 5 03:57:50 localhost puppet-user[53322]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.21 seconds Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created Oct 5 03:57:50 localhost puppet-user[53320]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.24 seconds Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}6626454871e6a8692d81b09b17969f804e05d0cbab5d6267f02be7b89a45b6ba' Oct 5 03:57:50 localhost puppet-user[52417]: Notice: 
/Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created Oct 5 03:57:50 localhost puppet-user[53322]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2' Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created Oct 5 03:57:50 localhost puppet-user[53322]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b' Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created Oct 5 03:57:50 localhost puppet-user[53322]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}aab5c972f84348dc00989710de282e31f9099d377a7ace264a7d78fd3ce33239' Oct 5 03:57:50 localhost puppet-user[53322]: Notice: Applied catalog in 0.10 seconds Oct 5 
03:57:50 localhost puppet-user[53322]: Application:
Oct 5 03:57:50 localhost puppet-user[53322]:    Initial environment: production
Oct 5 03:57:50 localhost puppet-user[53322]:    Converged environment: production
Oct 5 03:57:50 localhost puppet-user[53322]:    Run mode: user
Oct 5 03:57:50 localhost puppet-user[53322]: Changes:
Oct 5 03:57:50 localhost puppet-user[53322]:    Total: 3
Oct 5 03:57:50 localhost puppet-user[53322]: Events:
Oct 5 03:57:50 localhost puppet-user[53322]:    Success: 3
Oct 5 03:57:50 localhost puppet-user[53322]:    Total: 3
Oct 5 03:57:50 localhost puppet-user[53322]: Resources:
Oct 5 03:57:50 localhost puppet-user[53322]:    Skipped: 11
Oct 5 03:57:50 localhost puppet-user[53322]:    Changed: 3
Oct 5 03:57:50 localhost puppet-user[53322]:    Out of sync: 3
Oct 5 03:57:50 localhost puppet-user[53322]:    Total: 25
Oct 5 03:57:50 localhost puppet-user[53322]: Time:
Oct 5 03:57:50 localhost puppet-user[53322]:    Concat file: 0.00
Oct 5 03:57:50 localhost puppet-user[53322]:    Concat fragment: 0.00
Oct 5 03:57:50 localhost puppet-user[53322]:    File: 0.01
Oct 5 03:57:50 localhost puppet-user[53322]:    Transaction evaluation: 0.10
Oct 5 03:57:50 localhost puppet-user[53322]:    Catalog application: 0.10
Oct 5 03:57:50 localhost puppet-user[53322]:    Config retrieval: 0.26
Oct 5 03:57:50 localhost puppet-user[53322]:    Last run: 1759651070
Oct 5 03:57:50 localhost puppet-user[53322]:    Total: 0.10
Oct 5 03:57:50 localhost puppet-user[53322]: Version:
Oct 5 03:57:50 localhost puppet-user[53322]:    Config: 1759651070
Oct 5 03:57:50 localhost puppet-user[53322]:    Puppet: 7.10.0
Oct 5 03:57:50 localhost ovs-vsctl[53612]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642 Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53614]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created Oct 5 03:57:50 localhost systemd[1]: libpod-d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da.scope: Deactivated successfully. Oct 5 03:57:50 localhost systemd[1]: libpod-d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da.scope: Consumed 2.994s CPU time. Oct 5 03:57:50 localhost ovs-vsctl[53627]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-encap-ip=172.19.0.106 Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53646]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005471150.localdomain Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005471150.novalocal' to 'np0005471150.localdomain' Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53649]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created Oct 5 03:57:50 localhost podman[53635]: 2025-10-05 07:57:50.721482324 +0000 UTC m=+0.053266460 container died d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:49:23, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, vcs-type=git, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-central-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, container_name=container-puppet-ceilometer) Oct 5 03:57:50 localhost systemd[1]: 
var-lib-containers-storage-overlay\x2dcontainers-d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:50 localhost ovs-vsctl[53655]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000 Oct 5 03:57:50 localhost systemd[1]: var-lib-containers-storage-overlay-0c484c2c33151d186607b10732773e55aa9c5180a8691161d386c5cd0a84bfde-merged.mount: Deactivated successfully. Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53658]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60 Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53673]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-monitor-all=true Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created Oct 5 03:57:50 localhost podman[53635]: 2025-10-05 07:57:50.81584555 +0000 UTC m=+0.147629696 container cleanup d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:49:23, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, container_name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, architecture=x86_64, release=1, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 
'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Oct 5 03:57:50 localhost systemd[1]: libpod-conmon-d6f97f6ba687d0b1afadb6520130fd4b96e2c89a159c246a1017f9c7876503da.scope: Deactivated successfully. Oct 5 03:57:50 localhost ovs-vsctl[53675]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-ofctrl-wait-before-clear=8000 Oct 5 03:57:50 localhost python3[52031]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471150 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53688]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-encap-tos=0 Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created Oct 5 03:57:50 localhost systemd[1]: libpod-ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164.scope: Deactivated successfully. Oct 5 03:57:50 localhost systemd[1]: libpod-ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164.scope: Consumed 2.229s CPU time. Oct 5 03:57:50 localhost podman[53188]: 2025-10-05 07:57:50.86387252 +0000 UTC m=+2.585972121 container died ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vendor=Red Hat, Inc., container_name=container-puppet-rsyslog, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, distribution-scope=public, config_id=tripleo_puppet_step1, tcib_managed=true, version=17.1.9, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1, name=rhosp17/openstack-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53700]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:dd:a8:07 Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53711]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53716]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created Oct 5 03:57:50 localhost ovs-vsctl[53728]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:garp-max-timeout-sec=0 Oct 5 03:57:50 localhost puppet-user[53320]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created Oct 5 03:57:50 localhost podman[53699]: 2025-10-05 07:57:50.953132389 +0000 UTC m=+0.081609606 container cleanup ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, build-date=2025-07-21T12:58:40, container_name=container-puppet-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 5 03:57:50 localhost systemd[1]: libpod-conmon-ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164.scope: Deactivated successfully. Oct 5 03:57:50 localhost python3[52031]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471150 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 
'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro 
registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created Oct 5 03:57:50 localhost puppet-user[53320]: Notice: Applied catalog in 0.46 seconds Oct 5 03:57:50 localhost puppet-user[53320]: Application: Oct 5 03:57:50 localhost puppet-user[53320]: Initial environment: production Oct 5 03:57:50 localhost puppet-user[53320]: Converged environment: production Oct 5 03:57:50 localhost puppet-user[53320]: Run mode: user Oct 5 03:57:50 localhost puppet-user[53320]: Changes: Oct 5 03:57:50 localhost puppet-user[53320]: Total: 14 Oct 5 03:57:50 localhost puppet-user[53320]: Events: Oct 5 03:57:50 localhost puppet-user[53320]: Success: 14 Oct 5 03:57:50 localhost puppet-user[53320]: Total: 14 Oct 5 03:57:50 localhost puppet-user[53320]: Resources: Oct 5 03:57:50 localhost puppet-user[53320]: Skipped: 12 Oct 5 03:57:50 localhost puppet-user[53320]: Changed: 14 Oct 5 03:57:50 localhost puppet-user[53320]: Out of sync: 14 Oct 5 03:57:50 localhost puppet-user[53320]: Total: 29 Oct 5 03:57:50 localhost puppet-user[53320]: Time: Oct 5 03:57:50 localhost puppet-user[53320]: Exec: 0.02 Oct 5 03:57:50 localhost puppet-user[53320]: Config retrieval: 0.27 Oct 5 03:57:50 localhost puppet-user[53320]: Vs config: 0.39 Oct 5 03:57:50 localhost puppet-user[53320]: Transaction evaluation: 0.45 Oct 5 03:57:50 localhost puppet-user[53320]: Catalog application: 0.46 Oct 5 03:57:50 localhost puppet-user[53320]: Last run: 1759651070 Oct 5 03:57:50 localhost puppet-user[53320]: Total: 0.46 Oct 5 03:57:50 localhost puppet-user[53320]: Version: Oct 5 03:57:50 localhost puppet-user[53320]: Config: 1759651070 Oct 5 03:57:50 localhost puppet-user[53320]: Puppet: 7.10.0 Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: 
/Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created Oct 5 03:57:50 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: 
/Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: 
/Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created Oct 5 03:57:51 localhost systemd[1]: libpod-e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4.scope: Deactivated successfully. Oct 5 03:57:51 localhost systemd[1]: libpod-e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4.scope: Consumed 2.702s CPU time. 
Oct 5 03:57:51 localhost podman[53180]: 2025-10-05 07:57:51.332888172 +0000 UTC m=+3.071752020 container died e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 03:57:51 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully Oct 5 03:57:51 localhost systemd[1]: var-lib-containers-storage-overlay-535eb0a0eb77dd364c41c5b73a831eb7614980993ab80ff91603f8c8c9ad1931-merged.mount: Deactivated successfully. Oct 5 03:57:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea23f3ece5510cb27bf4887004c43c9905ec5f0d55fb98821b8cc719dd7c6164-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:51 localhost systemd[1]: var-lib-containers-storage-overlay-335cbaad9e9dbe5a781de03ec18c97c056a8bddf0cee816826c782168df02dab-merged.mount: Deactivated successfully. 
Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created Oct 5 03:57:52 localhost podman[53805]: 2025-10-05 07:57:52.678829958 +0000 UTC m=+1.339688460 container cleanup e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, container_name=container-puppet-ovn_controller, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 5 03:57:52 localhost python3[52031]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile 
/run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471150 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 03:57:52 localhost systemd[1]: libpod-conmon-e8a8c96b4c0a8949d90c958abdd9f2bea41e7c3df332698a07ffff3d27154ea4.scope: Deactivated successfully. 
Oct 5 03:57:52 localhost podman[53261]: 2025-10-05 07:57:48.48236265 +0000 UTC m=+0.036892134 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Oct 5 03:57:52 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Oct 5 03:57:52 localhost podman[53860]: 2025-10-05 07:57:52.906015024 +0000 UTC m=+0.068668531 container create 1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, version=17.1.9, build-date=2025-07-21T15:44:03, release=1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:57:52 localhost systemd[1]: Started libpod-conmon-1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440.scope. Oct 5 03:57:52 localhost systemd[1]: Started libcrun container. 
Oct 5 03:57:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b931bce6c09d5c1ab814a1cfb6889f50848a6a97faf718226aea19d8eaed8547/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:52 localhost podman[53860]: 2025-10-05 07:57:52.969124986 +0000 UTC m=+0.131778463 container init 1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, container_name=container-puppet-neutron, com.redhat.component=openstack-neutron-server-container, summary=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-server, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, vcs-type=git) Oct 5 03:57:52 localhost podman[53860]: 2025-10-05 07:57:52.871042952 +0000 UTC m=+0.033696489 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Oct 5 03:57:52 localhost podman[53860]: 2025-10-05 07:57:52.977265373 +0000 UTC m=+0.139918850 container start 1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-neutron-server, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, container_name=container-puppet-neutron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 03:57:52 localhost podman[53860]: 2025-10-05 07:57:52.977552851 +0000 UTC m=+0.140206328 container 
attach 1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, name=rhosp17/openstack-neutron-server, build-date=2025-07-21T15:44:03, io.openshift.expose-services=, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, container_name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, release=1) Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: 
/Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created Oct 5 03:57:53 localhost puppet-user[52417]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3ccd56cc76ec60fa08fd698d282c9c89b1e8c485a00f47d57569ed8f6f8a16e4' Oct 5 03:57:53 localhost puppet-user[52417]: Notice: Applied catalog in 4.43 seconds Oct 5 03:57:53 localhost puppet-user[52417]: Application: Oct 5 03:57:53 localhost puppet-user[52417]: Initial environment: production Oct 5 03:57:53 localhost puppet-user[52417]: Converged environment: production Oct 5 03:57:53 localhost puppet-user[52417]: Run mode: user Oct 5 03:57:53 localhost puppet-user[52417]: Changes: Oct 5 03:57:53 localhost puppet-user[52417]: Total: 183 Oct 5 03:57:53 localhost puppet-user[52417]: Events: Oct 5 03:57:53 localhost puppet-user[52417]: Success: 183 Oct 5 03:57:53 localhost 
puppet-user[52417]: Total: 183 Oct 5 03:57:53 localhost puppet-user[52417]: Resources: Oct 5 03:57:53 localhost puppet-user[52417]: Changed: 183 Oct 5 03:57:53 localhost puppet-user[52417]: Out of sync: 183 Oct 5 03:57:53 localhost puppet-user[52417]: Skipped: 57 Oct 5 03:57:53 localhost puppet-user[52417]: Total: 487 Oct 5 03:57:53 localhost puppet-user[52417]: Time: Oct 5 03:57:53 localhost puppet-user[52417]: Concat fragment: 0.00 Oct 5 03:57:53 localhost puppet-user[52417]: Anchor: 0.00 Oct 5 03:57:53 localhost puppet-user[52417]: File line: 0.00 Oct 5 03:57:53 localhost puppet-user[52417]: Virtlogd config: 0.00 Oct 5 03:57:53 localhost puppet-user[52417]: Virtstoraged config: 0.01 Oct 5 03:57:53 localhost puppet-user[52417]: Exec: 0.01 Oct 5 03:57:53 localhost puppet-user[52417]: Virtsecretd config: 0.01 Oct 5 03:57:53 localhost puppet-user[52417]: Virtnodedevd config: 0.01 Oct 5 03:57:53 localhost puppet-user[52417]: Virtqemud config: 0.02 Oct 5 03:57:53 localhost puppet-user[52417]: Package: 0.02 Oct 5 03:57:53 localhost puppet-user[52417]: File: 0.03 Oct 5 03:57:53 localhost puppet-user[52417]: Virtproxyd config: 0.03 Oct 5 03:57:53 localhost puppet-user[52417]: Augeas: 1.19 Oct 5 03:57:53 localhost puppet-user[52417]: Config retrieval: 1.52 Oct 5 03:57:53 localhost puppet-user[52417]: Last run: 1759651073 Oct 5 03:57:53 localhost puppet-user[52417]: Nova config: 2.90 Oct 5 03:57:53 localhost puppet-user[52417]: Transaction evaluation: 4.42 Oct 5 03:57:53 localhost puppet-user[52417]: Catalog application: 4.43 Oct 5 03:57:53 localhost puppet-user[52417]: Resources: 0.00 Oct 5 03:57:53 localhost puppet-user[52417]: Concat file: 0.00 Oct 5 03:57:53 localhost puppet-user[52417]: Total: 4.43 Oct 5 03:57:53 localhost puppet-user[52417]: Version: Oct 5 03:57:53 localhost puppet-user[52417]: Config: 1759651067 Oct 5 03:57:53 localhost puppet-user[52417]: Puppet: 7.10.0 Oct 5 03:57:54 localhost systemd[1]: 
libpod-4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a.scope: Deactivated successfully. Oct 5 03:57:54 localhost systemd[1]: libpod-4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a.scope: Consumed 8.342s CPU time. Oct 5 03:57:54 localhost podman[53933]: 2025-10-05 07:57:54.229846411 +0000 UTC m=+0.036802011 container died 4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, version=17.1.9, config_id=tripleo_puppet_step1, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, container_name=container-puppet-nova_libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': 
['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 5 03:57:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:54 localhost systemd[1]: var-lib-containers-storage-overlay-7ce3030a8de9705bba7c6ae4b6665c937182eeff0e22e8eedbe214efa8697d3e-merged.mount: Deactivated successfully. 
Oct 5 03:57:54 localhost podman[53933]: 2025-10-05 07:57:54.346357076 +0000 UTC m=+0.153312676 container cleanup 4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_puppet_step1, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., tcib_managed=true, container_name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2) Oct 5 03:57:54 localhost systemd[1]: libpod-conmon-4bbd03aa91b6a4fbe212d2dce8c268447521a2cf74dc8736e41ec738253cb41a.scope: Deactivated successfully. 
Oct 5 03:57:54 localhost python3[52031]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471150 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt 
profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw 
--volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 03:57:54 localhost puppet-user[53893]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass Oct 5 03:57:54 localhost puppet-user[53893]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:54 localhost puppet-user[53893]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:54 localhost puppet-user[53893]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:54 localhost puppet-user[53893]: (file & line not available) Oct 5 03:57:54 localhost puppet-user[53893]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:54 localhost puppet-user[53893]: (file & line not available) Oct 5 03:57:54 localhost puppet-user[53893]: Warning: Unknown variable: 'dhcp_agents_per_net'. 
(file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37) Oct 5 03:57:55 localhost puppet-user[53893]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.61 seconds Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created Oct 5 03:57:55 localhost 
puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created Oct 5 03:57:55 localhost puppet-user[53893]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created Oct 5 03:57:56 localhost puppet-user[53893]: Notice: Applied catalog in 0.43 seconds Oct 5 03:57:56 localhost puppet-user[53893]: Application: Oct 5 03:57:56 localhost puppet-user[53893]: Initial environment: production Oct 5 03:57:56 localhost puppet-user[53893]: Converged environment: production Oct 5 03:57:56 localhost puppet-user[53893]: Run mode: user Oct 5 03:57:56 localhost puppet-user[53893]: Changes: Oct 5 03:57:56 localhost puppet-user[53893]: Total: 33 Oct 5 03:57:56 localhost puppet-user[53893]: Events: Oct 5 03:57:56 localhost puppet-user[53893]: Success: 33 Oct 5 03:57:56 localhost puppet-user[53893]: Total: 
33 Oct 5 03:57:56 localhost puppet-user[53893]: Resources: Oct 5 03:57:56 localhost puppet-user[53893]: Skipped: 21 Oct 5 03:57:56 localhost puppet-user[53893]: Changed: 33 Oct 5 03:57:56 localhost puppet-user[53893]: Out of sync: 33 Oct 5 03:57:56 localhost puppet-user[53893]: Total: 155 Oct 5 03:57:56 localhost puppet-user[53893]: Time: Oct 5 03:57:56 localhost puppet-user[53893]: Resources: 0.00 Oct 5 03:57:56 localhost puppet-user[53893]: Ovn metadata agent config: 0.01 Oct 5 03:57:56 localhost puppet-user[53893]: Neutron config: 0.37 Oct 5 03:57:56 localhost puppet-user[53893]: Transaction evaluation: 0.42 Oct 5 03:57:56 localhost puppet-user[53893]: Catalog application: 0.43 Oct 5 03:57:56 localhost puppet-user[53893]: Config retrieval: 0.68 Oct 5 03:57:56 localhost puppet-user[53893]: Last run: 1759651076 Oct 5 03:57:56 localhost puppet-user[53893]: Total: 0.43 Oct 5 03:57:56 localhost puppet-user[53893]: Version: Oct 5 03:57:56 localhost puppet-user[53893]: Config: 1759651074 Oct 5 03:57:56 localhost puppet-user[53893]: Puppet: 7.10.0 Oct 5 03:57:56 localhost systemd[1]: libpod-1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440.scope: Deactivated successfully. Oct 5 03:57:56 localhost systemd[1]: libpod-1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440.scope: Consumed 3.530s CPU time. 
Oct 5 03:57:56 localhost podman[53860]: 2025-10-05 07:57:56.533527826 +0000 UTC m=+3.696181363 container died 1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, container_name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-server-container, release=1, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:03, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, io.buildah.version=1.33.12) Oct 5 03:57:56 localhost systemd[1]: tmp-crun.3nEX4r.mount: Deactivated successfully. Oct 5 03:57:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:56 localhost systemd[1]: var-lib-containers-storage-overlay-b931bce6c09d5c1ab814a1cfb6889f50848a6a97faf718226aea19d8eaed8547-merged.mount: Deactivated successfully. 
Oct 5 03:57:56 localhost podman[54074]: 2025-10-05 07:57:56.671532565 +0000 UTC m=+0.129283797 container cleanup 1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, name=rhosp17/openstack-neutron-server, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, config_id=tripleo_puppet_step1, container_name=container-puppet-neutron, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, build-date=2025-07-21T15:44:03, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 5 03:57:56 localhost systemd[1]: libpod-conmon-1aa3de42e7306889cd02a030b0433e84c03790825f459891fe090befbd178440.scope: Deactivated successfully. Oct 5 03:57:56 localhost python3[52031]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471150 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471150', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 
'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume 
/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 5 03:57:57 localhost python3[54127]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:57:58 localhost python3[54159]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:57:58 localhost python3[54209]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:57:59 localhost python3[54252]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651078.6780264-84911-6031041248242/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:57:59 localhost python3[54314]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:58:00 localhost python3[54357]: ansible-ansible.legacy.copy Invoked with
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651079.5631552-84911-258085159864916/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:58:00 localhost python3[54419]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:58:01 localhost python3[54462]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651080.534031-85012-185130028186517/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:58:01 localhost python3[54524]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:58:02 localhost python3[54567]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651081.4532022-85118-225975400040570/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False
checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:58:02 localhost python3[54597]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 03:58:02 localhost systemd[1]: Reloading.
Oct 5 03:58:02 localhost systemd-rc-local-generator[54620]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:58:02 localhost systemd-sysv-generator[54626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:58:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:58:02 localhost systemd[1]: Reloading.
Oct 5 03:58:03 localhost systemd-rc-local-generator[54657]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:58:03 localhost systemd-sysv-generator[54661]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:58:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:58:03 localhost systemd[1]: Starting TripleO Container Shutdown...
Oct 5 03:58:03 localhost systemd[1]: Finished TripleO Container Shutdown.
Oct 5 03:58:03 localhost python3[54720]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:58:04 localhost python3[54763]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651083.440089-85165-3565809073035/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:58:04 localhost python3[54825]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:58:05 localhost python3[54868]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651084.4021068-85194-50978292581841/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:58:05 localhost python3[54898]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 03:58:05 localhost systemd[1]: Reloading.
Oct 5 03:58:05 localhost systemd-rc-local-generator[54922]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:58:05 localhost systemd-sysv-generator[54926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:06 localhost systemd[1]: Reloading. Oct 5 03:58:06 localhost systemd-sysv-generator[54966]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:06 localhost systemd-rc-local-generator[54959]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:58:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:06 localhost systemd[1]: Starting Create netns directory... Oct 5 03:58:06 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 03:58:06 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 03:58:06 localhost systemd[1]: Finished Create netns directory. 
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: 1d5d37a9592a9b8a8e98d61f24e93486
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: da9a0dc7b40588672419e3ce10063e21
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: bfafc2f71ef1d8535e7a88ec76ac5234
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 012327e9705c184cfee14ca411150d67
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 012327e9705c184cfee14ca411150d67
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 012327e9705c184cfee14ca411150d67
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 012327e9705c184cfee14ca411150d67
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 012327e9705c184cfee14ca411150d67
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 012327e9705c184cfee14ca411150d67
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: d724ad8b89331350c29ab6a1bdffd03b
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: b9a01754dad058662a16b1bcdedd274e
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: b9a01754dad058662a16b1bcdedd274e
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 012327e9705c184cfee14ca411150d67
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 012327e9705c184cfee14ca411150d67
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 9e8d2afb999998c163aa5ea4d40dbbed
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67
Oct 5 03:58:06 localhost python3[54991]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 012327e9705c184cfee14ca411150d67
Oct 5 03:58:08 localhost python3[55049]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 5 03:58:08 localhost podman[55088]: 2025-10-05 07:58:08.634964076 +0000 UTC m=+0.086151618 container create e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, io.k8s.display-name=Red Hat OpenStack
Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=metrics_qdr_init_logs, name=rhosp17/openstack-qdrouterd, release=1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 03:58:08 localhost systemd[1]: Started libpod-conmon-e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209.scope. Oct 5 03:58:08 localhost systemd[1]: Started libcrun container. 
Oct 5 03:58:08 localhost podman[55088]: 2025-10-05 07:58:08.593799208 +0000 UTC m=+0.044986780 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:58:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b095dbb564d7abe4f61e98f82d00c92c39a4be15a933d5ebc8f09cab02993394/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Oct 5 03:58:08 localhost podman[55088]: 2025-10-05 07:58:08.704659943 +0000 UTC m=+0.155847495 container init e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, container_name=metrics_qdr_init_logs, name=rhosp17/openstack-qdrouterd, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 03:58:08 localhost podman[55088]: 2025-10-05 
07:58:08.714556137 +0000 UTC m=+0.165743679 container start e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, vcs-type=git, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=metrics_qdr_init_logs) Oct 5 03:58:08 localhost podman[55088]: 2025-10-05 07:58:08.714960048 +0000 UTC m=+0.166147640 container attach e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, container_name=metrics_qdr_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 03:58:08 localhost systemd[1]: libpod-e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209.scope: Deactivated successfully. 
Oct 5 03:58:08 localhost podman[55088]: 2025-10-05 07:58:08.722831717 +0000 UTC m=+0.174019269 container died e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, config_id=tripleo_step1, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, build-date=2025-07-21T13:07:59, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=metrics_qdr_init_logs, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, vcs-type=git) Oct 5 03:58:08 localhost podman[55107]: 2025-10-05 07:58:08.796684537 +0000 UTC m=+0.062776165 container cleanup e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, vendor=Red Hat, Inc., distribution-scope=public, container_name=metrics_qdr_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59) Oct 5 03:58:08 localhost systemd[1]: libpod-conmon-e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209.scope: Deactivated successfully. 
Oct 5 03:58:08 localhost python3[55049]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Oct 5 03:58:09 localhost podman[55177]: 2025-10-05 07:58:09.216947739 +0000 UTC m=+0.070786009 container create 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:58:09 localhost systemd[1]: Started libpod-conmon-951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.scope. Oct 5 03:58:09 localhost systemd[1]: Started libcrun container. 
Oct 5 03:58:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e34842e7b4c004f2994d43372776e245b87a7ff67070b72cbc86e95e6c79b83/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Oct 5 03:58:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e34842e7b4c004f2994d43372776e245b87a7ff67070b72cbc86e95e6c79b83/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Oct 5 03:58:09 localhost podman[55177]: 2025-10-05 07:58:09.179979174 +0000 UTC m=+0.033817484 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 03:58:09 localhost podman[55177]: 2025-10-05 07:58:09.303230338 +0000 UTC m=+0.157068698 container init 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr) Oct 5 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 03:58:09 localhost podman[55177]: 2025-10-05 07:58:09.335134449 +0000 UTC m=+0.188972729 container start 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, 
architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T13:07:59) Oct 5 03:58:09 localhost python3[55049]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1d5d37a9592a9b8a8e98d61f24e93486 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro 
--volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:58:09 localhost podman[55199]: 2025-10-05 07:58:09.479124937 +0000 UTC m=+0.134778004 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, release=1, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 03:58:09 localhost systemd[1]: var-lib-containers-storage-overlay-b095dbb564d7abe4f61e98f82d00c92c39a4be15a933d5ebc8f09cab02993394-merged.mount: Deactivated successfully. Oct 5 03:58:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2a74c2388bafafcb4da4a2e9d5c912c144485980791d0b05f7d9c63de3d1209-userdata-shm.mount: Deactivated successfully. 
Oct 5 03:58:09 localhost podman[55199]: 2025-10-05 07:58:09.704441113 +0000 UTC m=+0.360094220 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr) Oct 5 03:58:09 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 03:58:09 localhost python3[55274]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:10 localhost python3[55290]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:58:10 localhost python3[55351]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651090.1977394-85311-106805956451304/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:11 localhost python3[55367]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 03:58:11 localhost systemd[1]: Reloading. 
Oct 5 03:58:11 localhost systemd-rc-local-generator[55391]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:58:11 localhost systemd-sysv-generator[55395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:11 localhost python3[55418]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:58:12 localhost systemd[1]: Reloading. Oct 5 03:58:12 localhost systemd-rc-local-generator[55443]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:58:12 localhost systemd-sysv-generator[55446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:12 localhost systemd[1]: Starting metrics_qdr container... Oct 5 03:58:12 localhost systemd[1]: Started metrics_qdr container. 
Oct 5 03:58:12 localhost python3[55500]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:14 localhost python3[55621]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005471150 step=1 update_config_hash_only=False Oct 5 03:58:14 localhost python3[55637]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:15 localhost python3[55653]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 5 03:58:30 localhost sshd[55654]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 03:58:40 localhost podman[55656]: 2025-10-05 07:58:40.672672759 +0000 UTC m=+0.083812643 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, 
com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 03:58:40 localhost podman[55656]: 2025-10-05 07:58:40.891977024 +0000 UTC m=+0.303116958 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1) Oct 5 03:58:40 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 03:59:11 localhost systemd[1]: tmp-crun.Tlap6F.mount: Deactivated successfully. 
Oct 5 03:59:11 localhost podman[55764]: 2025-10-05 07:59:11.66583038 +0000 UTC m=+0.070776939 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Oct 5 03:59:11 localhost podman[55764]: 2025-10-05 07:59:11.81860783 +0000 UTC m=+0.223554419 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:59:11 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 03:59:42 localhost systemd[1]: tmp-crun.l3qLPC.mount: Deactivated successfully. 
Oct 5 03:59:42 localhost podman[55794]: 2025-10-05 07:59:42.693101814 +0000 UTC m=+0.100295019 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12) Oct 5 03:59:42 localhost podman[55794]: 2025-10-05 07:59:42.920806556 +0000 UTC m=+0.327999781 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12) Oct 5 03:59:42 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:00:13 localhost systemd[1]: tmp-crun.qAdDet.mount: Deactivated successfully. 
Oct 5 04:00:13 localhost podman[55900]: 2025-10-05 08:00:13.686862211 +0000 UTC m=+0.096593778 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:00:13 localhost podman[55900]: 2025-10-05 08:00:13.883153171 +0000 UTC m=+0.292884778 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, config_id=tripleo_step1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:07:59, distribution-scope=public) Oct 5 04:00:13 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:00:44 localhost podman[55931]: 2025-10-05 08:00:44.675269361 +0000 UTC m=+0.086909767 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1) Oct 5 04:00:44 localhost podman[55931]: 2025-10-05 08:00:44.915870492 +0000 UTC m=+0.327510868 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, config_id=tripleo_step1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:00:44 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:01:15 localhost podman[56048]: 2025-10-05 08:01:15.666849208 +0000 UTC m=+0.074264916 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step1, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd) Oct 5 04:01:15 localhost podman[56048]: 2025-10-05 08:01:15.849566653 +0000 UTC m=+0.256982351 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, config_id=tripleo_step1) Oct 5 04:01:15 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:01:46 localhost systemd[1]: tmp-crun.gOGAaa.mount: Deactivated successfully. 
Oct 5 04:01:46 localhost podman[56079]: 2025-10-05 08:01:46.681776212 +0000 UTC m=+0.089961390 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step1, distribution-scope=public, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1) Oct 5 04:01:46 localhost podman[56079]: 2025-10-05 08:01:46.873933552 +0000 UTC m=+0.282118730 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:07:59, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:01:46 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:02:17 localhost podman[56186]: 2025-10-05 08:02:17.665350275 +0000 UTC m=+0.076074326 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:02:17 localhost podman[56186]: 2025-10-05 08:02:17.84214542 +0000 UTC m=+0.252869531 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, version=17.1.9, release=1, io.buildah.version=1.33.12, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:02:17 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. 
Oct 5 04:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:02:48 localhost podman[56215]: 2025-10-05 08:02:48.671108496 +0000 UTC m=+0.082136029 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, io.k8s.description=Red 
Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team) Oct 5 04:02:48 localhost podman[56215]: 2025-10-05 08:02:48.873070381 +0000 UTC m=+0.284097844 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, config_id=tripleo_step1, container_name=metrics_qdr, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:02:48 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:03:19 localhost podman[56318]: 2025-10-05 08:03:19.683157729 +0000 UTC m=+0.085957705 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, batch=17.1_20250721.1, config_id=tripleo_step1, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 04:03:19 localhost podman[56318]: 2025-10-05 08:03:19.880799302 +0000 UTC m=+0.283599228 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:03:19 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:03:50 localhost systemd[1]: tmp-crun.5Vz1WW.mount: Deactivated successfully. 
Oct 5 04:03:50 localhost podman[56348]: 2025-10-05 08:03:50.680411633 +0000 UTC m=+0.091168010 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp 
openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd) Oct 5 04:03:50 localhost podman[56348]: 2025-10-05 08:03:50.910011257 +0000 UTC m=+0.320767594 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:03:50 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:03:53 localhost ceph-osd[31409]: osd.1 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2,3,1] r=2 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:03:55 localhost ceph-osd[31409]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [3,2,1] r=2 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:03:56 localhost ceph-osd[32364]: osd.4 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [4,5,3] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:03:58 localhost ceph-osd[32364]: osd.4 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [4,5,3] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:03:59 localhost ceph-osd[32364]: osd.4 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [3,4,2] r=1 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:14 localhost ceph-osd[32364]: osd.4 pg_epoch: 30 
pg[6.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [4,5,3] r=0 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:16 localhost ceph-osd[32364]: osd.4 pg_epoch: 31 pg[6.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [4,5,3] r=0 lpr=30 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:17 localhost ceph-osd[32364]: osd.4 pg_epoch: 31 pg[7.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [5,0,4] r=2 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:04:21 localhost podman[56485]: 2025-10-05 08:04:21.522789903 +0000 UTC m=+0.081837600 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, release=1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 5 04:04:21 localhost podman[56485]: 2025-10-05 08:04:21.723887723 +0000 UTC m=+0.282935310 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:04:21 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. 
Oct 5 04:04:29 localhost ceph-osd[31409]: osd.1 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=12.330595970s) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active pruub 1181.213012695s@ mbc={}] start_peering_interval up [2,3,1] -> [2,3,1], acting [2,3,1] -> [2,3,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:29 localhost ceph-osd[31409]: osd.1 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=12.328557014s) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1181.213012695s@ mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.1e( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.1f( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.1d( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.b( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.9( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost 
ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.1c( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.8( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.6( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.7( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.5( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.2( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.3( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 
pg_epoch: 36 pg[2.d( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.4( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.1( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.e( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.f( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.11( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.14( 
empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.12( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.16( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.17( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.18( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.19( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.1a( empty 
local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[31409]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=2 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:31 localhost ceph-osd[31409]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=12.568449020s) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 1183.510009766s@ mbc={}] start_peering_interval up [3,2,1] -> [3,2,1], acting [3,2,1] -> [3,2,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:31 localhost ceph-osd[32364]: osd.4 pg_epoch: 37 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=14.554518700s) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 1180.950073242s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:31 localhost ceph-osd[31409]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=12.565796852s) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1183.510009766s@ mbc={}] state: transitioning to Stray Oct 5 04:04:31 localhost ceph-osd[32364]: osd.4 pg_epoch: 37 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=14.554518700s) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1180.950073242s@ mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=20/21 n=0 ec=37/20 
lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 
les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 
sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 
lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[31409]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=2 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 
crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.19( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.8( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.7( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.6( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.3( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.4( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.2( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.5( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.10( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.11( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.14( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.13( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.12( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.16( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.17( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.18( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.15( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.9( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.0( empty local-lis/les=37/38 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: osd.4 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.0 deep-scrub starts
Oct 5 04:04:32 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.0 deep-scrub ok
Oct 5 04:04:33 localhost ceph-osd[32364]: osd.4 pg_epoch: 39 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=14.358526230s) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active pruub 1182.774536133s@ mbc={}] start_peering_interval up [3,4,2] -> [3,4,2], acting [3,4,2] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:33 localhost ceph-osd[32364]: osd.4 pg_epoch: 39 pg[6.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=39 pruub=14.827980042s) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active pruub 1183.244140625s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:33 localhost ceph-osd[32364]: osd.4 pg_epoch: 39 pg[6.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=39 pruub=14.827980042s) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1183.244140625s@ mbc={}] state: transitioning to Primary
Oct 5 04:04:33 localhost ceph-osd[32364]: osd.4 pg_epoch: 39 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=14.354993820s) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1182.774536133s@ mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1b( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.6( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.b( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.3( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.9( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.2( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.e( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.f( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.c( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.12( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.d( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.13( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.10( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.11( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.16( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.17( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1a( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.14( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.19( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.15( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.17( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.14( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.16( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.12( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.15( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.13( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.8( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.10( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.11( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.8( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.7( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.1( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.4( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.3( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.2( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.4( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.5( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.7( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.a( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.6( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.5( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.9( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.18( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.18( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.19( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.1b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.1a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1e( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.1d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.1c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1c( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.1f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[5.1e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=1 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1d( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1f( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.0( empty local-lis/les=39/40 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.9( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.3( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.6( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.18( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.5( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.4( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.7( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.12( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.10( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.19( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.13( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.11( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.14( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.17( empty local-lis/les=39/40 n=0 ec=39/30
lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.1c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.15( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:34 localhost ceph-osd[32364]: osd.4 pg_epoch: 40 pg[6.16( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=0 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:35 localhost sshd[56529]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:04:35 localhost ceph-osd[32364]: osd.4 pg_epoch: 41 pg[7.0( v 33'39 (0'0,33'39] local-lis/les=31/32 n=22 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=13.796941757s) [5,0,4] r=2 lpr=41 pi=[31,41)/1 luod=0'0 lua=33'37 crt=33'39 lcod 33'38 mlcod 0'0 active pruub 1184.279663086s@ mbc={}] start_peering_interval up [5,0,4] -> [5,0,4], acting [5,0,4] -> [5,0,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:35 localhost ceph-osd[32364]: osd.4 pg_epoch: 41 pg[7.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=13.795331955s) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 lcod 33'38 mlcod 0'0 unknown NOTIFY pruub 1184.279663086s@ mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 
sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.1( v 33'39 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to 
Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[32364]: osd.4 pg_epoch: 42 pg[7.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:39 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.17 scrub starts Oct 5 
04:04:39 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.17 scrub ok Oct 5 04:04:40 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.14 scrub starts Oct 5 04:04:40 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.14 scrub ok Oct 5 04:04:42 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.13 scrub starts Oct 5 04:04:42 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.13 scrub ok Oct 5 04:04:43 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.15 scrub starts Oct 5 04:04:43 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.15 scrub ok Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.082382202s) [3,2,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937255859s@ mbc={}] start_peering_interval up [2,3,1] -> [3,2,4], acting [2,3,1] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122961998s) [0,5,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> [0,5,1], acting [3,2,1] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122904778s) [0,1,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> [0,1,2], acting [3,2,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost 
ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.082049370s) [4,2,3] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937133789s@ mbc={}] start_peering_interval up [2,3,1] -> [4,2,3], acting [2,3,1] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.081986427s) [4,2,3] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937133789s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122778893s) [0,1,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122643471s) [0,5,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.081851006s) [3,2,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937255859s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.127554893s) [4,0,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.983032227s@ mbc={}] start_peering_interval up [3,2,1] -> [4,0,5], acting [3,2,1] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 
4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.080334663s) [3,2,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.935791016s@ mbc={}] start_peering_interval up [2,3,1] -> [3,2,4], acting [2,3,1] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.127470016s) [4,0,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.983032227s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122963905s) [1,2,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978515625s@ mbc={}] start_peering_interval up [3,2,1] -> [1,2,3], acting [3,2,1] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079955101s) [5,3,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.935546875s@ mbc={}] start_peering_interval up [2,3,1] -> [5,3,4], acting [2,3,1] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122963905s) [1,2,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1195.978515625s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost 
ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.080154419s) [3,2,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.935791016s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.127210617s) [2,1,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.983032227s@ mbc={}] start_peering_interval up [3,2,1] -> [2,1,3], acting [3,2,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.127171516s) [2,1,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.983032227s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078804970s) [5,3,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.934570312s@ mbc={}] start_peering_interval up [2,3,1] -> [5,3,1], acting [2,3,1] -> [5,3,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078770638s) [5,3,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.934570312s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122295380s) [1,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978271484s@ mbc={}] 
start_peering_interval up [3,2,1] -> [1,5,0], acting [3,2,1] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122295380s) [1,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1195.978271484s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079914093s) [5,3,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.935546875s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079456329s) [0,1,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.935546875s@ mbc={}] start_peering_interval up [2,3,1] -> [0,1,2], acting [2,3,1] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.081085205s) [2,0,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937255859s@ mbc={}] start_peering_interval up [2,3,1] -> [2,0,1], acting [2,3,1] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079425812s) [0,1,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.935546875s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: 
osd.1 pg_epoch: 43 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.081059456s) [2,0,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937255859s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.126646042s) [0,2,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.983032227s@ mbc={}] start_peering_interval up [3,2,1] -> [0,2,4], acting [3,2,1] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.126618385s) [0,2,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.983032227s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121772766s) [2,3,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978149414s@ mbc={}] start_peering_interval up [3,2,1] -> [2,3,4], acting [3,2,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121734619s) [2,3,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.978149414s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121709824s) [1,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978271484s@ mbc={}] 
start_peering_interval up [3,2,1] -> [1,5,0], acting [3,2,1] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121709824s) [1,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1195.978271484s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079761505s) [3,1,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936401367s@ mbc={}] start_peering_interval up [2,3,1] -> [3,1,2], acting [2,3,1] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079732895s) [3,1,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936401367s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121916771s) [2,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978637695s@ mbc={}] start_peering_interval up [3,2,1] -> [2,4,3], acting [3,2,1] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121891022s) [2,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.978637695s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: 
osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120968819s) [5,0,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> [5,0,4], acting [3,2,1] -> [5,0,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079470634s) [3,1,5] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936401367s@ mbc={}] start_peering_interval up [2,3,1] -> [3,1,5], acting [2,3,1] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120894432s) [5,0,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079443932s) [3,1,5] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936401367s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120991707s) [3,5,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978149414s@ mbc={}] start_peering_interval up [3,2,1] -> [3,5,4], acting [3,2,1] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 
lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079572678s) [0,2,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936645508s@ mbc={}] start_peering_interval up [2,3,1] -> [0,2,4], acting [2,3,1] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079548836s) [0,2,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936645508s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120955467s) [3,5,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.978149414s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079854965s) [5,1,3] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937133789s@ mbc={}] start_peering_interval up [2,3,1] -> [5,1,3], acting [2,3,1] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079037666s) [0,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936401367s@ mbc={}] start_peering_interval up [2,3,1] -> [0,5,4], acting [2,3,1] -> [0,5,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079266548s) [4,3,2] r=-1 lpr=43 
pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936645508s@ mbc={}] start_peering_interval up [2,3,1] -> [4,3,2], acting [2,3,1] -> [4,3,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120577812s) [0,5,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978027344s@ mbc={}] start_peering_interval up [3,2,1] -> [0,5,1], acting [3,2,1] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078998566s) [0,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936401367s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079235077s) [4,3,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936645508s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120546341s) [0,5,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.978027344s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120528221s) [5,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978271484s@ mbc={}] start_peering_interval up [3,2,1] -> [5,4,3], acting [3,2,1] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> -1, features acting 4540138322906710015 
upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078758240s) [0,5,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936401367s@ mbc={}] start_peering_interval up [2,3,1] -> [0,5,1], acting [2,3,1] -> [0,5,1], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120500565s) [5,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.978271484s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078718185s) [0,5,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936401367s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078807831s) [3,1,5] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936645508s@ mbc={}] start_peering_interval up [2,3,1] -> [3,1,5], acting [2,3,1] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.119902611s) [5,3,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> [5,3,4], acting [3,2,1] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 
pg_epoch: 43 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078749657s) [3,1,5] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936645508s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.119874001s) [5,3,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.079774857s) [5,1,3] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937133789s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.124640465s) [4,2,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.982666016s@ mbc={}] start_peering_interval up [3,2,1] -> [4,2,3], acting [3,2,1] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.124611855s) [4,2,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.982666016s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078681946s) [4,3,5] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936889648s@ mbc={}] start_peering_interval up [2,3,1] -> [4,3,5], acting [2,3,1] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 
4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078654289s) [4,3,5] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936889648s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078597069s) [5,1,0] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936889648s@ mbc={}] start_peering_interval up [2,3,1] -> [5,1,0], acting [2,3,1] -> [5,1,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078569412s) [5,1,0] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936889648s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.119358063s) [2,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978027344s@ mbc={}] start_peering_interval up [3,2,1] -> [2,4,3], acting [3,2,1] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078669548s) [5,4,3] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937255859s@ mbc={}] start_peering_interval up [2,3,1] -> [5,4,3], acting [2,3,1] -> [5,4,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 
43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078611374s) [5,4,3] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937255859s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.119143486s) [4,3,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> [4,3,5], acting [3,2,1] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.119274139s) [2,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.978027344s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.119091988s) [4,3,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078068733s) [3,4,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936889648s@ mbc={}] start_peering_interval up [2,3,1] -> [3,4,2], acting [2,3,1] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118975639s) [2,3,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> 
[2,3,4], acting [3,2,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118871689s) [5,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977783203s@ mbc={}] start_peering_interval up [3,2,1] -> [5,4,3], acting [3,2,1] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.078040123s) [3,4,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936889648s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118906975s) [2,3,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118807793s) [5,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977783203s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118756294s) [1,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977783203s@ mbc={}] start_peering_interval up [3,2,1] -> [1,3,2], acting [3,2,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.4( empty 
local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118756294s) [1,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1195.977783203s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077703476s) [1,0,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936767578s@ mbc={}] start_peering_interval up [2,3,1] -> [1,0,5], acting [2,3,1] -> [1,0,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118686676s) [4,3,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> [4,3,2], acting [3,2,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077703476s) [1,0,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.936767578s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077903748s) [5,0,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937133789s@ mbc={}] start_peering_interval up [2,3,1] -> [5,0,1], acting [2,3,1] -> [5,0,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077874184s) 
[5,0,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937133789s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118606567s) [4,2,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> [4,2,0], acting [3,2,1] -> [4,2,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118579865s) [4,2,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077537537s) [1,0,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936889648s@ mbc={}] start_peering_interval up [2,3,1] -> [1,0,2], acting [2,3,1] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118394852s) [3,1,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> [3,1,5], acting [3,2,1] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077635765s) [2,0,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937133789s@ 
mbc={}] start_peering_interval up [2,3,1] -> [2,0,1], acting [2,3,1] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077236176s) [4,5,3] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936645508s@ mbc={}] start_peering_interval up [2,3,1] -> [4,5,3], acting [2,3,1] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118655205s) [4,3,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077547073s) [2,0,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937133789s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077205658s) [4,5,3] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936645508s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118246078s) [3,1,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077047348s) [0,4,2] r=-1 lpr=43 
pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936645508s@ mbc={}] start_peering_interval up [2,3,1] -> [0,4,2], acting [2,3,1] -> [0,4,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118119240s) [4,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977905273s@ mbc={}] start_peering_interval up [3,2,1] -> [4,5,0], acting [3,2,1] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077537537s) [1,0,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.936889648s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077016830s) [0,4,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936645508s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118022919s) [5,1,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977783203s@ mbc={}] start_peering_interval up [3,2,1] -> [5,1,3], acting [3,2,1] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.117995262s) [5,1,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977783203s@ mbc={}] state: 
transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118089676s) [4,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.076575279s) [3,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.936523438s@ mbc={}] start_peering_interval up [2,3,1] -> [3,5,4], acting [2,3,1] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118220329s) [5,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.978149414s@ mbc={}] start_peering_interval up [3,2,1] -> [5,4,3], acting [3,2,1] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118146896s) [5,4,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.978149414s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077160835s) [0,2,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937255859s@ mbc={}] start_peering_interval up [2,3,1] -> [0,2,1], acting [2,3,1] -> [0,2,1], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 
pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.076536179s) [3,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.936523438s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077054024s) [0,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937133789s@ mbc={}] start_peering_interval up [2,3,1] -> [0,5,4], acting [2,3,1] -> [0,5,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077122688s) [0,2,1] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937255859s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077021599s) [0,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937133789s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.117514610s) [1,5,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977783203s@ mbc={}] start_peering_interval up [3,2,1] -> [1,5,3], acting [3,2,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.117514610s) [1,5,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1195.977783203s@ mbc={}] state: transitioning to 
Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.117497444s) [4,3,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1195.977783203s@ mbc={}] start_peering_interval up [3,2,1] -> [4,3,5], acting [3,2,1] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.077035904s) [4,0,5] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937377930s@ mbc={}] start_peering_interval up [2,3,1] -> [4,0,5], acting [2,3,1] -> [4,0,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.117463112s) [4,3,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1195.977783203s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.076959610s) [4,2,3] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.937255859s@ mbc={}] start_peering_interval up [2,3,1] -> [4,2,3], acting [2,3,1] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.076994896s) [4,0,5] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937377930s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 
pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.076931953s) [4,2,3] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.937255859s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,0,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.1e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,0,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143380165s) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.464965820s@ mbc={}] start_peering_interval up [3,4,2] -> [4,3,5], acting [3,4,2] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143380165s) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.464965820s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.11( 
empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.149876595s) [2,3,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.471923828s@ mbc={}] start_peering_interval up [3,4,2] -> [2,3,4], acting [3,4,2] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.149847984s) [2,3,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.471923828s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,5,3] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151362419s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.473632812s@ mbc={}] start_peering_interval up [3,4,2] -> [2,3,1], acting [3,4,2] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151748657s) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.473754883s@ mbc={}] start_peering_interval up [3,4,2] -> [4,3,5], acting [3,4,2] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151329994s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.473632812s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151748657s) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.473754883s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151420593s) [0,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.474121094s@ mbc={}] start_peering_interval up [3,4,2] -> [0,1,5], acting [3,4,2] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151392937s) [0,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.474121094s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151520729s) [0,4,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.474365234s@ mbc={}] start_peering_interval up [3,4,2] -> [0,4,5], acting [3,4,2] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,3,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 
unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151453972s) [0,4,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.474365234s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,2,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,2,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,3,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.149108887s) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.474975586s@ mbc={}] start_peering_interval up [3,4,2] -> [4,5,0], acting [3,4,2] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 
localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.149108887s) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.474975586s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,3,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,3,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,2,3] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.148295403s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475585938s@ mbc={}] start_peering_interval up [3,4,2] -> [2,3,1], acting [3,4,2] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,0,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1f( empty 
local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.148157120s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.475585938s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,2,3] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.147190094s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475219727s@ mbc={}] start_peering_interval up [3,4,2] -> [1,2,3], acting [3,4,2] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.147154808s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.475219727s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.17( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,2,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145078659s) [5,4,0] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.473266602s@ mbc={}] start_peering_interval up [3,4,2] -> [5,4,0], acting [3,4,2] -> [5,4,0], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/22 
lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.125096321s) [3,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.454101562s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,1], acting [4,5,3] -> [3,5,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.125060081s) [3,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.454101562s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.144255638s) [5,4,0] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.473266602s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145651817s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.474731445s@ mbc={}] start_peering_interval up [3,4,2] -> [1,0,2], acting [3,4,2] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.146964073s) [0,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475219727s@ mbc={}] start_peering_interval up [3,4,2] -> [0,1,5], acting [3,4,2] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.142303467s) [5,1,3] r=-1 
lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.471679688s@ mbc={}] start_peering_interval up [3,4,2] -> [5,1,3], acting [3,4,2] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145851135s) [0,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.475219727s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.142255783s) [5,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.471679688s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145586967s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.474731445s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.144974709s) [4,2,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.474609375s@ mbc={}] start_peering_interval up [3,4,2] -> [4,2,0], acting [3,4,2] -> [4,2,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.144974709s) [4,2,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.474609375s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 
les/c/f=40/40/0 sis=43 pruub=14.145083427s) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475097656s@ mbc={}] start_peering_interval up [3,4,2] -> [4,5,0], acting [3,4,2] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145083427s) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.475097656s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145640373s) [5,0,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475708008s@ mbc={}] start_peering_interval up [3,4,2] -> [5,0,1], acting [3,4,2] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.144810677s) [5,1,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.474975586s@ mbc={}] start_peering_interval up [3,4,2] -> [5,1,0], acting [3,4,2] -> [5,1,0], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.144786835s) [5,1,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.474975586s@ 
mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145592690s) [5,0,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.475708008s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.138394356s) [4,0,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.468627930s@ mbc={}] start_peering_interval up [4,5,3] -> [4,0,5], acting [4,5,3] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.138394356s) [4,0,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.468627930s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.123455048s) [1,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453857422s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,0], acting [4,5,3] -> [1,5,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.123405457s) [1,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453857422s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.144408226s) [5,1,3] r=-1 lpr=43 pi=[39,43)/1 
crt=0'0 mlcod 0'0 active pruub 1193.474975586s@ mbc={}] start_peering_interval up [3,4,2] -> [5,1,3], acting [3,4,2] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.144368172s) [5,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.474975586s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.137830734s) [5,0,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.468627930s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,1], acting [4,5,3] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143778801s) [2,4,0] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.474609375s@ mbc={}] start_peering_interval up [3,4,2] -> [2,4,0], acting [3,4,2] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.137785912s) [5,0,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.468627930s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143693924s) [2,4,0] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.474609375s@ mbc={}] state: 
transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.144030571s) [2,3,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475219727s@ mbc={}] start_peering_interval up [3,4,2] -> [2,3,4], acting [3,4,2] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143242836s) [5,0,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.474365234s@ mbc={}] start_peering_interval up [3,4,2] -> [5,0,4], acting [3,4,2] -> [5,0,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.13( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,0,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143980980s) [2,3,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.475219727s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143050194s) [5,0,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.474365234s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143648148s) [2,4,3] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475219727s@ mbc={}] 
start_peering_interval up [3,4,2] -> [2,4,3], acting [3,4,2] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143589020s) [2,4,3] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.475219727s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122098923s) [3,5,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453735352s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122026443s) [3,5,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453735352s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120986938s) [4,2,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453002930s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,0], acting [4,5,3] -> [4,2,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.16( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.138751030s) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.470703125s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,0], acting [4,5,3] -> [4,5,0], 
acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120986938s) [4,2,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.453002930s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.16( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.138751030s) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.470703125s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.138138771s) [3,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.470458984s@ mbc={}] start_peering_interval up [4,5,3] -> [3,1,5], acting [4,5,3] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136285782s) [4,0,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.468627930s@ mbc={}] start_peering_interval up [4,5,3] -> [4,0,5], acting [4,5,3] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.138081551s) [3,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.470458984s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 
les/c/f=40/40/0 sis=43 pruub=14.136285782s) [4,0,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.468627930s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121383667s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.454101562s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,2], acting [4,5,3] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141434669s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.474121094s@ mbc={}] start_peering_interval up [3,4,2] -> [1,2,3], acting [3,4,2] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143140793s) [1,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475830078s@ mbc={}] start_peering_interval up [3,4,2] -> [1,2,0], acting [3,4,2] -> [1,2,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135103226s) [3,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.468017578s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: 
osd.4 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120973587s) [2,1,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.454345703s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,3], acting [4,5,3] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120061874s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.454101562s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141788483s) [1,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.475830078s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140748024s) [0,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475097656s@ mbc={}] start_peering_interval up [3,4,2] -> [0,5,4], acting [3,4,2] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133703232s) [3,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.468017578s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost 
ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140693665s) [0,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.475097656s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,3,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.19( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135482788s) [5,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.470214844s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.11( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135468483s) [4,3,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.470092773s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,2], acting [4,5,3] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.19( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135449409s) [5,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.470214844s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.11( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135468483s) [4,3,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.470092773s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 
localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.137051582s) [3,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.471923828s@ mbc={}] start_peering_interval up [3,4,2] -> [3,5,4], acting [3,4,2] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.137019157s) [3,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.471923828s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.137992859s) [3,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.472900391s@ mbc={}] start_peering_interval up [3,4,2] -> [3,1,5], acting [3,4,2] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.137960434s) [3,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.472900391s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118138313s) [2,4,0] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453369141s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.c( empty 
local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118044853s) [2,4,0] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453369141s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.18( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133289337s) [4,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.468627930s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.18( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133289337s) [4,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.468627930s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.117311478s) [3,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453002930s@ mbc={}] start_peering_interval up [4,5,3] -> [3,1,2], acting [4,5,3] -> [3,1,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.b( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,2,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 
les/c/f=38/38/0 sis=43 pruub=12.117244720s) [3,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453002930s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139858246s) [2,0,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.475830078s@ mbc={}] start_peering_interval up [3,4,2] -> [2,0,4], acting [3,4,2] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.10( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.134098053s) [4,0,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.470092773s@ mbc={}] start_peering_interval up [4,5,3] -> [4,0,2], acting [4,5,3] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139813423s) [2,0,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.475830078s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.10( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.134098053s) [4,0,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.470092773s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.130730629s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.467041016s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, 
up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.117664337s) [3,2,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453857422s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.117633820s) [3,2,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453857422s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,5,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115000725s) [4,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.451416016s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,0], acting [4,5,3] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115000725s) [4,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.451416016s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133328438s) [5,4,3] r=1 lpr=43 
pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.469848633s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,3], acting [4,5,3] -> [5,4,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133299828s) [5,4,3] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.469848633s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.204997063s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1187.541625977s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.116666794s) [1,0,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453369141s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.204960823s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.541625977s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.116624832s) [1,0,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 
mlcod 0'0 unknown NOTIFY pruub 1191.453369141s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.5( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.131764412s) [5,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.468750000s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.14( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133219719s) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.470336914s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,0], acting [4,5,3] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.5( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.131720543s) [5,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.468750000s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.14( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133219719s) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.470336914s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.4( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.131727219s) [4,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.469116211s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 
4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.116601944s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.454101562s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,2], acting [4,5,3] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.4( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.131727219s) [4,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.469116211s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.116561890s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.454101562s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.5( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.203359604s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1187.540893555s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.5( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.203328133s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.540893555s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115244865s) 
[1,3,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.452880859s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,2], acting [4,5,3] -> [1,3,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115211487s) [1,3,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.452880859s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.6( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.129284859s) [1,5,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.467041016s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.6( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.129258156s) [1,5,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.467041016s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.203437805s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1187.541381836s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.203392029s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 
crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.541381836s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.114430428s) [0,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.452758789s@ mbc={}] start_peering_interval up [4,5,3] -> [0,5,1], acting [4,5,3] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.114393234s) [0,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.452758789s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.128238678s) [3,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.466796875s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.128195763s) [3,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.466796875s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.113833427s) [3,5,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.452636719s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 
upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.113793373s) [3,5,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.452636719s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.130790710s) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.469848633s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting [4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.130790710s) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.469848633s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.1( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.196764946s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1187.536132812s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.114919662s) [2,1,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.454345703s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.1( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.196686745s) 
[2,0,4] r=2 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.536132812s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139995575s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.474121094s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.1b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.130692482s) [3,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.467041016s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.128019333s) [5,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.467651367s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.127977371s) [5,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.467651367s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.3( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.126836777s) [5,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.466796875s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.c( empty local-lis/les=37/38 
n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.112035751s) [5,4,0] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.451904297s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,0], acting [4,5,3] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.112969398s) [3,2,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453002930s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.111989975s) [5,4,0] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.451904297s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.112939835s) [3,2,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453002930s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.126624107s) [5,4,0] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.466674805s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,0], acting [4,5,3] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.126587868s) [5,4,0] r=1 
lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.466674805s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.9( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.126250267s) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.466552734s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting [4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.9( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.126250267s) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.466552734s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.112703323s) [5,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453125000s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,1], acting [4,5,3] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.3( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.201242447s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1187.541625977s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning 
to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.112649918s) [5,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453125000s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.7( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.128849030s) [5,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.469360352s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133377075s) [3,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.473999023s@ mbc={}] start_peering_interval up [3,4,2] -> [3,5,4], acting [3,4,2] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.7( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.128813744s) [5,1,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.469360352s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133356094s) [3,5,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.473999023s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.112385750s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active 
pruub 1191.453125000s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,2], acting [4,5,3] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.125704765s) [1,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.466552734s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.113100052s) [2,1,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453857422s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,3], acting [4,5,3] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.125672340s) [1,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.466552734s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.112322807s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453125000s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.113074303s) [2,1,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453857422s@ mbc={}] state: transitioning to Stray Oct 5 
04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.3( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.125824928s) [5,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.466796875s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.128849983s) [2,3,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.469848633s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,4], acting [4,5,3] -> [2,3,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.128824234s) [2,3,4] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.469848633s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.9( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.200315475s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1187.541381836s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.9( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.200281143s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.541381836s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.200074196s) [2,0,4] 
r=2 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1187.541259766s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.200037956s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.541259766s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110915184s) [1,2,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.452270508s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110857010s) [1,2,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.452270508s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110529900s) [2,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.451904297s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 
4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110451698s) [2,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.451904297s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110422134s) [2,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.451904297s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110409737s) [2,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.451904297s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.125692368s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.467285156s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110588074s) [1,3,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.452392578s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,5], acting [4,5,3] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost 
ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.125642776s) [1,2,3] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.467285156s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110543251s) [1,3,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.452392578s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.3( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.201176643s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.541625977s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.126726151s) [2,4,0] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.468750000s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.199760437s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1187.541870117s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.126678467s) [2,4,0] r=1 lpr=43 pi=[39,43)/1 crt=0'0 
mlcod 0'0 unknown NOTIFY pruub 1193.468750000s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.199723244s) [2,0,4] r=2 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.541870117s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109580040s) [1,3,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.451660156s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,2], acting [4,5,3] -> [1,3,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109530449s) [1,3,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.451660156s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.13( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.127932549s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.470092773s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.12( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.127345085s) [0,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.469726562s@ mbc={}] start_peering_interval up [4,5,3] -> [0,2,1], acting [4,5,3] -> [0,2,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 
4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.12( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.127310753s) [0,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.469726562s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109255791s) [2,3,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.451660156s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.108989716s) [3,1,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.451416016s@ mbc={}] start_peering_interval up [4,5,3] -> [3,1,5], acting [4,5,3] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109213829s) [3,4,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.451660156s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109388351s) [1,3,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.451904297s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,2], acting [4,5,3] 
-> [1,3,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.108949661s) [3,1,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.451416016s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109173775s) [3,4,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.451660156s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109358788s) [1,3,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.451904297s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109210014s) [2,3,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.451660156s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.17( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.127704620s) [3,4,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.470336914s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.17( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.127675056s) [3,4,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.470336914s@ 
mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.104427338s) [1,2,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.447143555s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.15( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.127854347s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.470703125s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.104393005s) [1,2,3] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.447143555s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.15( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.127817154s) [2,3,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.470703125s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[6.13( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.127903938s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.470092773s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110170364s) [2,1,0] r=-1 lpr=43 
pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.453002930s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,0], acting [4,5,3] -> [2,1,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110017776s) [2,1,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.453002930s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.121492386s) [1,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.464965820s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,0], acting [3,4,2] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.121249199s) [1,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.464965820s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,0,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.1e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.19( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,1,3] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,0,1] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,3,1] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.1b( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,2,1] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,1,0] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [5,0,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,5,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.19( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,1,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,4,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 
pg[6.7( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,1,3] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,3,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.1f( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,5,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [5,3,4] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [5,0,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201931000s) [4,3,5] r=0 lpr=44 pi=[41,44)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1195.541259766s@ mbc={}] start_peering_interval up [5,0,4] -> [4,3,5], acting [5,0,4] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201931000s) [4,3,5] r=0 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown pruub 1195.541259766s@ mbc={}] state: transitioning to Primary Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.b( empty 
local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,5,4] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [5,4,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [5,3,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[7.6( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.200033188s) [4,3,5] r=0 lpr=44 pi=[41,44)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1195.541015625s@ mbc={}] start_peering_interval up [5,0,4] -> [4,3,5], acting [5,0,4] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[7.6( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.200033188s) [4,3,5] r=0 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown pruub 1195.541015625s@ mbc={}] state: transitioning to Primary Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [5,4,3] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.199896812s) [4,3,5] r=0 lpr=44 pi=[41,44)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1195.541381836s@ mbc={}] start_peering_interval up [5,0,4] -> [4,3,5], 
acting [5,0,4] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.199896812s) [4,3,5] r=0 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown pruub 1195.541381836s@ mbc={}] state: transitioning to Primary Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,2,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.1c( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,1,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[7.2( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.199402809s) [4,3,5] r=0 lpr=44 pi=[41,44)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1195.541503906s@ mbc={}] start_peering_interval up [5,0,4] -> [4,3,5], acting [5,0,4] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[7.2( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.199402809s) [4,3,5] r=0 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown pruub 1195.541503906s@ mbc={}] state: transitioning to Primary Oct 5 04:04:45 localhost 
ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [5,4,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,4,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,3,1] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,1,0] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,3,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,1,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 
les/c/f=38/38/0 sis=43) [2,0,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,1,3] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,0,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [5,4,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,1,3] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,2,4] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,2,4] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.15( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,3,1] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,3,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] 
state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.10( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,3,1] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,3,1] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,1,3] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,1,3] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.1a( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,0,1] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[6.12( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,1] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,5,4] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost 
ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,2,4] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[4.17( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,2,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[3.19( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,2,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[4.b( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,2,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,3,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react 
AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,5,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,5,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[3.4( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,1,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[5.b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 
sis=43) [0,1,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.3( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,5,1] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,2,4] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.6( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 43 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,5,4] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active 
mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,0,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,0,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [1,0,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [1,0,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[6.13( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,0,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[3.17( empty 
local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31409]: osd.1 pg_epoch: 44 pg[4.1e( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[3.2( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,3,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,5,3] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[3.6( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,2,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[2.1e( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,0,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,2,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react 
AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.1f( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,0,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[3.b( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[3.7( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,3,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[2.e( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,3,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[2.1( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,3,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[3.1( empty local-lis/les=43/44 n=0 ec=37/20 
lis/c=37/37 les/c/f=38/38/0 sis=43) [4,2,3] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[3.1e( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,3,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.18( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,2,3] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,2,3] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.11( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,3,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,2,3] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,3,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating 
complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[3.18( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,0,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.1d( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,0,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,5,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.16( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.14( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[6.10( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,0,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) 
[4,5,0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32364]: osd.4 pg_epoch: 44 pg[4.16( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,2,0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:46 localhost ceph-osd[32364]: osd.4 pg_epoch: 45 pg[7.2( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44) [4,3,5] r=0 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:46 localhost ceph-osd[32364]: osd.4 pg_epoch: 45 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44) [4,3,5] r=0 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Oct 5 04:04:46 localhost ceph-osd[32364]: osd.4 pg_epoch: 45 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44) [4,3,5] r=0 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:46 localhost ceph-osd[32364]: osd.4 pg_epoch: 45 pg[7.6( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44) [4,3,5] r=0 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Oct 5 04:04:48 localhost python3[56547]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Oct 5 04:04:49 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.19 scrub starts Oct 5 04:04:50 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts Oct 5 04:04:50 localhost python3[56563]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:04:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:04:52 localhost systemd[1]: tmp-crun.K8qNsD.mount: Deactivated successfully. Oct 5 04:04:52 localhost podman[56564]: 2025-10-05 08:04:52.68435276 +0000 UTC m=+0.097210638 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr) Oct 5 04:04:52 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts Oct 5 04:04:52 localhost python3[56590]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:04:52 localhost podman[56564]: 2025-10-05 08:04:52.896666892 +0000 UTC m=+0.309524750 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, 
architecture=x86_64, name=rhosp17/openstack-qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, io.openshift.expose-services=) Oct 5 04:04:52 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:04:52 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok Oct 5 04:04:54 localhost python3[56657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:04:55 localhost python3[56700]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651494.6484375-93302-12483509232902/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=d68e0db228a7d8458c08a66635a19e112f8e9d34 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:04:57 localhost ceph-osd[31409]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=0 lpr=46 pi=[43,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:57 localhost ceph-osd[31409]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=0 lpr=46 pi=[43,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:57 localhost ceph-osd[32364]: osd.4 pg_epoch: 46 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.849872589s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1204.370361328s@ mbc={}] start_peering_interval up [2,0,4] -> [1,2,3], acting [2,0,4] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 
1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:57 localhost ceph-osd[32364]: osd.4 pg_epoch: 46 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.849775314s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1204.370361328s@ mbc={}] state: transitioning to Stray Oct 5 04:04:57 localhost ceph-osd[32364]: osd.4 pg_epoch: 46 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.848984718s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1204.370117188s@ mbc={}] start_peering_interval up [2,0,4] -> [1,2,3], acting [2,0,4] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:57 localhost ceph-osd[31409]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=0 lpr=46 pi=[43,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:57 localhost ceph-osd[32364]: osd.4 pg_epoch: 46 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.848777771s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1204.370117188s@ mbc={}] state: transitioning to Stray Oct 5 04:04:57 localhost ceph-osd[31409]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=0 lpr=46 pi=[43,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:57 localhost ceph-osd[32364]: osd.4 pg_epoch: 46 pg[7.3( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.848267555s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1204.370117188s@ mbc={}] start_peering_interval up [2,0,4] -> 
[1,2,3], acting [2,0,4] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:57 localhost ceph-osd[32364]: osd.4 pg_epoch: 46 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.848697662s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1204.370727539s@ mbc={}] start_peering_interval up [2,0,4] -> [1,2,3], acting [2,0,4] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:57 localhost ceph-osd[32364]: osd.4 pg_epoch: 46 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.848496437s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1204.370727539s@ mbc={}] state: transitioning to Stray Oct 5 04:04:57 localhost ceph-osd[32364]: osd.4 pg_epoch: 46 pg[7.3( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.848092079s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1204.370117188s@ mbc={}] state: transitioning to Stray Oct 5 04:04:58 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 2.6 scrub starts Oct 5 04:04:58 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 2.6 scrub ok Oct 5 04:04:59 localhost ceph-osd[31409]: osd.1 pg_epoch: 47 pg[7.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=0 lpr=46 pi=[43,46)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete Oct 5 04:04:59 localhost ceph-osd[31409]: osd.1 pg_epoch: 47 pg[7.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=0 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 
active+degraded m=3 mbc={255={(1+2)=3}}] state: react AllReplicasActivated Activating complete Oct 5 04:04:59 localhost ceph-osd[31409]: osd.1 pg_epoch: 47 pg[7.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=0 lpr=46 pi=[43,46)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Oct 5 04:04:59 localhost ceph-osd[31409]: osd.1 pg_epoch: 47 pg[7.7( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=0 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Oct 5 04:04:59 localhost ceph-osd[32364]: osd.4 pg_epoch: 48 pg[7.4( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=8.970039368s) [1,5,0] r=-1 lpr=48 pi=[41,48)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1203.541748047s@ mbc={}] start_peering_interval up [5,0,4] -> [1,5,0], acting [5,0,4] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:59 localhost ceph-osd[32364]: osd.4 pg_epoch: 48 pg[7.4( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=8.969923019s) [1,5,0] r=-1 lpr=48 pi=[41,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1203.541748047s@ mbc={}] state: transitioning to Stray Oct 5 04:04:59 localhost ceph-osd[31409]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48) [1,5,0] r=0 lpr=48 pi=[41,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:59 localhost ceph-osd[32364]: osd.4 pg_epoch: 48 pg[7.c( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=8.967925072s) [1,5,0] r=-1 lpr=48 pi=[41,48)/1 luod=0'0 crt=33'39 lcod 
0'0 mlcod 0'0 active pruub 1203.541748047s@ mbc={}] start_peering_interval up [5,0,4] -> [1,5,0], acting [5,0,4] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:59 localhost ceph-osd[32364]: osd.4 pg_epoch: 48 pg[7.c( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=8.967802048s) [1,5,0] r=-1 lpr=48 pi=[41,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1203.541748047s@ mbc={}] state: transitioning to Stray Oct 5 04:04:59 localhost ceph-osd[31409]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48) [1,5,0] r=0 lpr=48 pi=[41,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:05:00 localhost python3[56762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:05:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4204 writes, 19K keys, 4204 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4204 writes, 285 syncs, 14.75 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 949 writes, 3654 keys, 949 commit groups, 1.0 writes per commit group, ingest: 1.52 MB, 0.00 MB/s#012Interval WAL: 949 writes, 142 syncs, 6.68 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) 
Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bb61f3610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 
level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bb61f3610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative 
compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Oct 5 04:05:00 localhost python3[56805]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651499.9199657-93302-164173565516164/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=e73c1aa4a58d9801d80c3db0f6e886adadfd04c0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:00 localhost ceph-osd[31409]: osd.1 pg_epoch: 49 pg[7.4( v 33'39 lc 33'8 (0'0,33'39] local-lis/les=48/49 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48) [1,5,0] r=0 lpr=48 pi=[41,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(2+1)=4}}] state: react AllReplicasActivated Activating complete Oct 5 04:05:00 localhost ceph-osd[31409]: osd.1 pg_epoch: 49 pg[7.c( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48) [1,5,0] r=0 lpr=48 pi=[41,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Oct 5 04:05:01 localhost ceph-osd[32364]: osd.4 pg_epoch: 50 pg[7.5( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50 pruub=15.744841576s) [3,4,2] r=1 lpr=50 pi=[43,50)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1212.370605469s@ mbc={}] start_peering_interval up [2,0,4] -> [3,4,2], acting [2,0,4] -> 
[3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:01 localhost ceph-osd[32364]: osd.4 pg_epoch: 50 pg[7.5( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50 pruub=15.744737625s) [3,4,2] r=1 lpr=50 pi=[43,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1212.370605469s@ mbc={}] state: transitioning to Stray Oct 5 04:05:01 localhost ceph-osd[32364]: osd.4 pg_epoch: 50 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50 pruub=15.743717194s) [3,4,2] r=1 lpr=50 pi=[43,50)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1212.370117188s@ mbc={}] start_peering_interval up [2,0,4] -> [3,4,2], acting [2,0,4] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:01 localhost ceph-osd[32364]: osd.4 pg_epoch: 50 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50 pruub=15.743629456s) [3,4,2] r=1 lpr=50 pi=[43,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1212.370117188s@ mbc={}] state: transitioning to Stray Oct 5 04:05:01 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.0 scrub starts Oct 5 04:05:01 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.0 scrub ok Oct 5 04:05:02 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts Oct 5 04:05:02 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok Oct 5 04:05:04 localhost ceph-osd[32364]: osd.4 pg_epoch: 52 pg[7.6( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=14.117703438s) [1,2,3] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1213.392700195s@ mbc={255={}}] start_peering_interval up [4,3,5] -> [1,2,3], acting [4,3,5] -> [1,2,3], acting_primary 4 -> 1, up_primary 
4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:04 localhost ceph-osd[32364]: osd.4 pg_epoch: 52 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=14.117362976s) [1,2,3] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1213.392456055s@ mbc={255={}}] start_peering_interval up [4,3,5] -> [1,2,3], acting [4,3,5] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:04 localhost ceph-osd[32364]: osd.4 pg_epoch: 52 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=14.117202759s) [1,2,3] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1213.392456055s@ mbc={}] state: transitioning to Stray Oct 5 04:05:04 localhost ceph-osd[32364]: osd.4 pg_epoch: 52 pg[7.6( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=14.117325783s) [1,2,3] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1213.392700195s@ mbc={}] state: transitioning to Stray Oct 5 04:05:04 localhost ceph-osd[31409]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52) [1,2,3] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:05:04 localhost ceph-osd[31409]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52) [1,2,3] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:05:04 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.4 scrub starts Oct 5 04:05:04 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.4 scrub ok Oct 5 04:05:04 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.1d scrub starts Oct 5 04:05:04 localhost ceph-osd[32364]: 
log_channel(cluster) log [DBG] : 6.1d scrub ok Oct 5 04:05:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:05:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4737 writes, 21K keys, 4737 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4737 writes, 419 syncs, 11.31 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1353 writes, 5305 keys, 1353 commit groups, 1.0 writes per commit group, ingest: 2.01 MB, 0.00 MB/s#012Interval WAL: 1353 writes, 224 syncs, 6.04 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eb89d542d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eb89d542d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 
Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m Oct 5 04:05:05 localhost python3[56867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:05 localhost ceph-osd[31409]: osd.1 pg_epoch: 53 pg[7.e( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=52/53 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52) [1,2,3] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Oct 5 04:05:05 localhost ceph-osd[31409]: osd.1 
pg_epoch: 53 pg[7.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=2 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52) [1,2,3] r=0 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Oct 5 04:05:05 localhost python3[56910]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651505.0090172-93302-8305126080171/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=83e4275d7d1daa1c790a878bb63e3d5916f491b2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:06 localhost ceph-osd[31409]: osd.1 pg_epoch: 54 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=8.692909241s) [2,0,1] r=2 lpr=54 pi=[46,54)/1 crt=33'39 mlcod 0'0 active pruub 1214.548095703s@ mbc={255={}}] start_peering_interval up [1,2,3] -> [2,0,1], acting [1,2,3] -> [2,0,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:06 localhost ceph-osd[31409]: osd.1 pg_epoch: 54 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=8.692617416s) [2,0,1] r=2 lpr=54 pi=[46,54)/1 crt=33'39 mlcod 0'0 active pruub 1214.548095703s@ mbc={255={}}] start_peering_interval up [1,2,3] -> [2,0,1], acting [1,2,3] -> [2,0,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:06 localhost ceph-osd[31409]: osd.1 pg_epoch: 54 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=8.692623138s) [2,0,1] r=2 lpr=54 pi=[46,54)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 
1214.548095703s@ mbc={}] state: transitioning to Stray Oct 5 04:05:06 localhost ceph-osd[31409]: osd.1 pg_epoch: 54 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=8.692469597s) [2,0,1] r=2 lpr=54 pi=[46,54)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1214.548095703s@ mbc={}] state: transitioning to Stray Oct 5 04:05:06 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.1f scrub starts Oct 5 04:05:06 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.1f scrub ok Oct 5 04:05:06 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts Oct 5 04:05:06 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok Oct 5 04:05:07 localhost ceph-osd[32364]: osd.4 pg_epoch: 55 pg[7.8( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=55 pruub=9.222810745s) [1,2,3] r=-1 lpr=55 pi=[41,55)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1211.542846680s@ mbc={}] start_peering_interval up [5,0,4] -> [1,2,3], acting [5,0,4] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:07 localhost ceph-osd[32364]: osd.4 pg_epoch: 55 pg[7.8( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=55 pruub=9.222712517s) [1,2,3] r=-1 lpr=55 pi=[41,55)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1211.542846680s@ mbc={}] state: transitioning to Stray Oct 5 04:05:07 localhost ceph-osd[31409]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=55) [1,2,3] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:05:07 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 6.6 scrub starts Oct 5 04:05:08 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 6.6 scrub ok Oct 5 04:05:08 localhost ceph-osd[31409]: 
log_channel(cluster) log [DBG] : 4.1e scrub starts Oct 5 04:05:08 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.1e scrub ok Oct 5 04:05:08 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.1f deep-scrub starts Oct 5 04:05:08 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.1f deep-scrub ok Oct 5 04:05:09 localhost ceph-osd[31409]: osd.1 pg_epoch: 56 pg[7.8( v 33'39 (0'0,33'39] local-lis/les=55/56 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=55) [1,2,3] r=0 lpr=55 pi=[41,55)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:05:09 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.f scrub starts Oct 5 04:05:09 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.f scrub ok Oct 5 04:05:09 localhost ceph-osd[32364]: osd.4 pg_epoch: 57 pg[7.9( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=15.623806953s) [4,0,2] r=0 lpr=57 pi=[43,57)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1220.371337891s@ mbc={}] start_peering_interval up [2,0,4] -> [4,0,2], acting [2,0,4] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:09 localhost ceph-osd[32364]: osd.4 pg_epoch: 57 pg[7.9( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=15.623806953s) [4,0,2] r=0 lpr=57 pi=[43,57)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown pruub 1220.371337891s@ mbc={}] state: transitioning to Primary Oct 5 04:05:10 localhost ceph-osd[32364]: osd.4 pg_epoch: 58 pg[7.9( v 33'39 (0'0,33'39] local-lis/les=57/58 n=1 ec=41/31 lis/c=43/43 les/c/f=44/44/0 sis=57) [4,0,2] r=0 lpr=57 pi=[43,57)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:05:11 localhost python3[56972]: ansible-ansible.legacy.stat Invoked with 
path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:12 localhost python3[57017]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651511.4943516-93818-78808523328210/source _original_basename=tmp6c2lcd8u follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:12 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.6 scrub starts Oct 5 04:05:12 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.6 scrub ok Oct 5 04:05:13 localhost python3[57079]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:13 localhost python3[57122]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651513.162503-93963-132020613830868/source _original_basename=tmpu1bqnt74 follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:14 localhost python3[57152]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None Oct 5 04:05:14 localhost python3[57170]: 
ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:05:16 localhost ansible-async_wrapper.py[57342]: Invoked with 61827793336 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651515.9022706-94119-137846721842764/AnsiballZ_command.py _ Oct 5 04:05:16 localhost ansible-async_wrapper.py[57345]: Starting module and watcher Oct 5 04:05:16 localhost ansible-async_wrapper.py[57345]: Start watching 57346 (3600) Oct 5 04:05:16 localhost ansible-async_wrapper.py[57346]: Start module (57346) Oct 5 04:05:16 localhost ansible-async_wrapper.py[57342]: Return async_wrapper task started. Oct 5 04:05:16 localhost python3[57363]: ansible-ansible.legacy.async_status Invoked with jid=61827793336.57342 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:05:18 localhost ceph-osd[32364]: osd.4 pg_epoch: 59 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=59 pruub=8.250276566s) [3,4,5] r=1 lpr=59 pi=[44,59)/1 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1221.392700195s@ mbc={}] start_peering_interval up [4,3,5] -> [3,4,5], acting [4,3,5] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:18 localhost ceph-osd[32364]: osd.4 pg_epoch: 59 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=59 pruub=8.250171661s) [3,4,5] r=1 lpr=59 pi=[44,59)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1221.392700195s@ mbc={}] state: transitioning to Stray Oct 5 04:05:18 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.1e scrub starts Oct 5 04:05:18 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.1e scrub ok Oct 5 04:05:20 localhost ceph-osd[31409]: osd.1 pg_epoch: 61 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 
les/c/f=47/48/0 sis=61 pruub=11.049955368s) [1,2,0] r=0 lpr=61 pi=[46,61)/1 crt=33'39 mlcod 0'0 active pruub 1230.549194336s@ mbc={255={}}] start_peering_interval up [1,2,3] -> [1,2,0], acting [1,2,3] -> [1,2,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:20 localhost ceph-osd[31409]: osd.1 pg_epoch: 61 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/48/0 sis=61 pruub=11.049955368s) [1,2,0] r=0 lpr=61 pi=[46,61)/1 crt=33'39 mlcod 0'0 unknown pruub 1230.549194336s@ mbc={}] state: transitioning to Primary Oct 5 04:05:20 localhost puppet-user[57366]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 04:05:20 localhost puppet-user[57366]: (file: /etc/puppet/hiera.yaml) Oct 5 04:05:20 localhost puppet-user[57366]: Warning: Undefined variable '::deploy_config_name'; Oct 5 04:05:20 localhost puppet-user[57366]: (file & line not available) Oct 5 04:05:20 localhost puppet-user[57366]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 04:05:20 localhost puppet-user[57366]: (file & line not available) Oct 5 04:05:20 localhost puppet-user[57366]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 5 04:05:20 localhost puppet-user[57366]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 5 04:05:20 localhost puppet-user[57366]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.12 seconds Oct 5 04:05:20 localhost puppet-user[57366]: Notice: Applied catalog in 0.05 seconds Oct 5 04:05:20 localhost puppet-user[57366]: Application: Oct 5 04:05:20 localhost puppet-user[57366]: Initial environment: production Oct 5 04:05:20 localhost puppet-user[57366]: Converged environment: production Oct 5 04:05:20 localhost puppet-user[57366]: Run mode: user Oct 5 04:05:20 localhost puppet-user[57366]: Changes: Oct 5 04:05:20 localhost puppet-user[57366]: Events: Oct 5 04:05:20 localhost puppet-user[57366]: Resources: Oct 5 04:05:20 localhost puppet-user[57366]: Total: 10 Oct 5 04:05:20 localhost puppet-user[57366]: Time: Oct 5 04:05:20 localhost puppet-user[57366]: Filebucket: 0.00 Oct 5 04:05:20 localhost puppet-user[57366]: Schedule: 0.00 Oct 5 04:05:20 localhost puppet-user[57366]: File: 0.00 Oct 5 04:05:20 localhost puppet-user[57366]: Exec: 0.00 Oct 5 04:05:20 localhost puppet-user[57366]: Augeas: 0.02 Oct 5 04:05:20 localhost puppet-user[57366]: Transaction evaluation: 0.04 Oct 5 04:05:20 localhost puppet-user[57366]: Catalog application: 0.05 Oct 5 04:05:20 localhost puppet-user[57366]: Config retrieval: 0.15 Oct 5 04:05:20 localhost puppet-user[57366]: Last run: 1759651520 Oct 5 04:05:20 localhost puppet-user[57366]: Total: 0.06 Oct 5 04:05:20 localhost puppet-user[57366]: Version: Oct 5 04:05:20 localhost puppet-user[57366]: Config: 1759651520 Oct 5 04:05:20 localhost puppet-user[57366]: Puppet: 7.10.0 Oct 5 04:05:20 localhost ansible-async_wrapper.py[57346]: Module complete (57346) Oct 5 04:05:20 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.5 scrub starts Oct 5 04:05:20 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.5 scrub ok Oct 5 04:05:21 localhost ceph-osd[31409]: osd.1 pg_epoch: 62 pg[7.b( v 33'39 
(0'0,33'39] local-lis/les=61/62 n=1 ec=41/31 lis/c=46/46 les/c/f=47/48/0 sis=61) [1,2,0] r=0 lpr=61 pi=[46,61)/1 crt=33'39 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Oct 5 04:05:21 localhost ansible-async_wrapper.py[57345]: Done in kid B. Oct 5 04:05:22 localhost ceph-osd[31409]: osd.1 pg_epoch: 63 pg[7.c( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=41/31 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=10.627257347s) [2,1,3] r=1 lpr=63 pi=[48,63)/1 crt=33'39 mlcod 0'0 active pruub 1232.177856445s@ mbc={255={}}] start_peering_interval up [1,5,0] -> [2,1,3], acting [1,5,0] -> [2,1,3], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:22 localhost ceph-osd[31409]: osd.1 pg_epoch: 63 pg[7.c( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=41/31 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=10.626890182s) [2,1,3] r=1 lpr=63 pi=[48,63)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1232.177856445s@ mbc={}] state: transitioning to Stray Oct 5 04:05:22 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.1d scrub starts Oct 5 04:05:22 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.1d scrub ok Oct 5 04:05:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:05:23 localhost systemd[1]: tmp-crun.qGKg15.mount: Deactivated successfully. 
Oct 5 04:05:23 localhost podman[57592]: 2025-10-05 08:05:23.685350155 +0000 UTC m=+0.090784199 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:05:23 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts Oct 5 04:05:23 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok Oct 5 04:05:23 localhost podman[57592]: 2025-10-05 08:05:23.904897179 +0000 UTC m=+0.310331213 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:05:23 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. 
Oct 5 04:05:24 localhost ceph-osd[32364]: osd.4 pg_epoch: 65 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=41/31 lis/c=50/50 les/c/f=51/51/0 sis=65 pruub=10.394072533s) [2,4,0] r=1 lpr=65 pi=[50,65)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1229.681762695s@ mbc={}] start_peering_interval up [3,4,2] -> [2,4,0], acting [3,4,2] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:24 localhost ceph-osd[32364]: osd.4 pg_epoch: 65 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=41/31 lis/c=50/50 les/c/f=51/51/0 sis=65 pruub=10.393972397s) [2,4,0] r=1 lpr=65 pi=[50,65)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1229.681762695s@ mbc={}] state: transitioning to Stray Oct 5 04:05:24 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.a scrub starts Oct 5 04:05:26 localhost ceph-osd[31409]: osd.1 pg_epoch: 67 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=41/31 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=11.007642746s) [1,0,5] r=0 lpr=67 pi=[52,67)/1 crt=33'39 mlcod 0'0 active pruub 1236.872070312s@ mbc={255={}}] start_peering_interval up [1,2,3] -> [1,0,5], acting [1,2,3] -> [1,0,5], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:26 localhost ceph-osd[31409]: osd.1 pg_epoch: 67 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=41/31 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=11.007642746s) [1,0,5] r=0 lpr=67 pi=[52,67)/1 crt=33'39 mlcod 0'0 unknown pruub 1236.872070312s@ mbc={}] state: transitioning to Primary Oct 5 04:05:27 localhost python3[57652]: ansible-ansible.legacy.async_status Invoked with jid=61827793336.57342 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 68 crush map has features 432629239337189376, adjusting msgr requires for clients Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 68 crush map 
has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 68 crush map has features 3314933000854323200, adjusting msgr requires for osds Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 pg_epoch: 68 pg[4.6( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68 pruub=14.049523354s) [0,4,2] r=-1 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 active pruub 1240.921875000s@ mbc={}] start_peering_interval up [0,1,2] -> [0,4,2], acting [0,1,2] -> [0,4,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 pg_epoch: 68 pg[4.6( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68 pruub=14.049454689s) [0,4,2] r=-1 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1240.921875000s@ mbc={}] state: transitioning to Stray Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 pg_epoch: 68 pg[3.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=37/37 les/c/f=38/38/0 sis=68 pruub=9.105810165s) [0,2,1] r=2 lpr=68 pi=[37,68)/1 crt=0'0 mlcod 0'0 active pruub 1235.979003906s@ mbc={}] start_peering_interval up [3,2,1] -> [0,2,1], acting [3,2,1] -> [0,2,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 pg_epoch: 68 pg[3.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=37/37 les/c/f=38/38/0 sis=68 pruub=9.105731964s) [0,2,1] r=2 lpr=68 pi=[37,68)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1235.979003906s@ mbc={}] state: transitioning to Stray Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 pg_epoch: 68 pg[4.f( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68 pruub=14.030076027s) [4,3,5] r=-1 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 active pruub 1240.903564453s@ mbc={}] start_peering_interval up [1,3,5] -> [4,3,5], acting 
[1,3,5] -> [4,3,5], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 pg_epoch: 68 pg[4.f( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68 pruub=14.030006409s) [4,3,5] r=-1 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1240.903564453s@ mbc={}] state: transitioning to Stray Oct 5 04:05:27 localhost ceph-osd[32364]: osd.4 68 crush map has features 432629239337189376, adjusting msgr requires for clients Oct 5 04:05:27 localhost ceph-osd[32364]: osd.4 68 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons Oct 5 04:05:27 localhost ceph-osd[32364]: osd.4 68 crush map has features 3314933000854323200, adjusting msgr requires for osds Oct 5 04:05:27 localhost ceph-osd[32364]: osd.4 pg_epoch: 68 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68) [4,3,5] r=0 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:05:27 localhost ceph-osd[31409]: osd.1 pg_epoch: 68 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=41/31 lis/c=52/52 les/c/f=53/53/0 sis=67) [1,0,5] r=0 lpr=67 pi=[52,67)/1 crt=33'39 mlcod 0'0 active+degraded mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Oct 5 04:05:27 localhost python3[57668]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 04:05:27 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.9 scrub starts Oct 5 04:05:27 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] 
: 6.9 scrub ok Oct 5 04:05:28 localhost python3[57684]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:05:28 localhost ceph-osd[32364]: osd.4 pg_epoch: 68 pg[4.6( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68) [0,4,2] r=1 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:05:28 localhost python3[57734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:28 localhost python3[57752]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpu4970pro recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 04:05:29 localhost ceph-osd[32364]: osd.4 pg_epoch: 69 pg[4.f( empty local-lis/les=68/69 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68) [4,3,5] r=0 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:05:29 localhost python3[57782]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:29 localhost ceph-osd[31409]: 
log_channel(cluster) log [DBG] : 5.19 deep-scrub starts Oct 5 04:05:29 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok Oct 5 04:05:30 localhost python3[57885]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Oct 5 04:05:30 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.17 scrub starts Oct 5 04:05:30 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.17 scrub ok Oct 5 04:05:31 localhost python3[57904]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:32 localhost python3[57936]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:05:32 localhost python3[57986]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:33 localhost python3[58004]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown 
_original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:33 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.17 scrub starts Oct 5 04:05:33 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.17 scrub ok Oct 5 04:05:33 localhost python3[58066]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:33 localhost python3[58084]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:34 localhost ceph-osd[31409]: osd.1 pg_epoch: 70 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=41/31 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=13.040542603s) [1,3,5] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=33'39 mlcod 0'0 active pruub 1246.892211914s@ mbc={}] start_peering_interval up [2,0,1] -> [1,3,5], acting [2,0,1] -> [1,3,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:34 localhost ceph-osd[31409]: osd.1 pg_epoch: 70 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=41/31 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=13.040542603s) [1,3,5] r=0 lpr=70 pi=[54,70)/1 
crt=33'39 mlcod 0'0 unknown pruub 1246.892211914s@ mbc={}] state: transitioning to Primary
Oct 5 04:05:34 localhost python3[58146]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:34 localhost python3[58164]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:35 localhost python3[58226]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:35 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Oct 5 04:05:35 localhost ceph-osd[31409]: osd.1 pg_epoch: 71 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=70/71 n=1 ec=41/31 lis/c=54/54 les/c/f=55/55/0 sis=70) [1,3,5] r=0 lpr=70 pi=[54,70)/1 crt=33'39 mlcod 0'0 active+degraded mbc={255={(1+2)=3}}] state: react AllReplicasActivated Activating complete
Oct 5 04:05:35 localhost python3[58244]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:35 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Oct 5 04:05:36 localhost python3[58274]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:05:36 localhost systemd[1]: Reloading.
Oct 5 04:05:36 localhost systemd-rc-local-generator[58299]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:05:36 localhost systemd-sysv-generator[58302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:05:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:05:36 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct 5 04:05:36 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct 5 04:05:36 localhost python3[58360]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:37 localhost python3[58378]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:37 localhost python3[58440]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:37 localhost python3[58458]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:38 localhost python3[58488]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:05:38 localhost systemd[1]: Reloading.
Oct 5 04:05:38 localhost systemd-rc-local-generator[58512]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:05:38 localhost systemd-sysv-generator[58518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:05:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:05:38 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 5 04:05:38 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 5 04:05:38 localhost systemd[1]: Starting Create netns directory...
Oct 5 04:05:38 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 5 04:05:38 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 5 04:05:38 localhost systemd[1]: Finished Create netns directory.
Oct 5 04:05:39 localhost python3[58546]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 5 04:05:40 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Oct 5 04:05:40 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Oct 5 04:05:41 localhost python3[58603]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 5 04:05:41 localhost podman[58666]: 2025-10-05 08:05:41.468862158 +0000 UTC m=+0.080069610 container create eabccaed4b2daee456895a431563dd07deb8021ef9244dbf314e154c2dcd4efc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:48:37)
Oct 5 04:05:41 localhost systemd[1]: Started libpod-conmon-eabccaed4b2daee456895a431563dd07deb8021ef9244dbf314e154c2dcd4efc.scope.
Oct 5 04:05:41 localhost podman[58666]: 2025-10-05 08:05:41.423982549 +0000 UTC m=+0.035190031 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 5 04:05:41 localhost podman[58689]: 2025-10-05 08:05:41.525723122 +0000 UTC m=+0.075715359 container create 5532d514d3c4b441f703691c3674e9266dd332fc302a03353f8fc9066e13b37c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=2)
Oct 5 04:05:41 localhost systemd[1]: Started libcrun container.
Oct 5 04:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7da012b8ac0fd067564e92349f80bc46f2da0551eb783c762923d47d0b2bb3/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 04:05:41 localhost podman[58666]: 2025-10-05 08:05:41.553703041 +0000 UTC m=+0.164910494 container init eabccaed4b2daee456895a431563dd07deb8021ef9244dbf314e154c2dcd4efc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, config_id=tripleo_step2, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_compute_init_log, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git)
Oct 5 04:05:41 localhost podman[58666]: 2025-10-05 08:05:41.563463123 +0000 UTC m=+0.174670565 container start eabccaed4b2daee456895a431563dd07deb8021ef9244dbf314e154c2dcd4efc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step2, release=1, container_name=nova_compute_init_log, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container)
Oct 5 04:05:41 localhost python3[58603]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Oct 5 04:05:41 localhost systemd[1]: Started libpod-conmon-5532d514d3c4b441f703691c3674e9266dd332fc302a03353f8fc9066e13b37c.scope.
Oct 5 04:05:41 localhost systemd[1]: libpod-eabccaed4b2daee456895a431563dd07deb8021ef9244dbf314e154c2dcd4efc.scope: Deactivated successfully.
Oct 5 04:05:41 localhost systemd[1]: Started libcrun container.
Oct 5 04:05:41 localhost podman[58689]: 2025-10-05 08:05:41.489919015 +0000 UTC m=+0.039911232 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 5 04:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e650c9ef9dd9c7aa3816e57bd4817b570df2ab677b250f4cfc6482d4aec83d08/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Oct 5 04:05:41 localhost podman[58689]: 2025-10-05 08:05:41.603827527 +0000 UTC m=+0.153819764 container init 5532d514d3c4b441f703691c3674e9266dd332fc302a03353f8fc9066e13b37c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, release=2, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, container_name=nova_virtqemud_init_logs, architecture=x86_64)
Oct 5 04:05:41 localhost podman[58689]: 2025-10-05 08:05:41.612325044 +0000 UTC m=+0.162317281 container start 5532d514d3c4b441f703691c3674e9266dd332fc302a03353f8fc9066e13b37c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, release=2, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_virtqemud_init_logs, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step2, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 5 04:05:41 localhost python3[58603]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm
Oct 5 04:05:41 localhost systemd[1]: libpod-5532d514d3c4b441f703691c3674e9266dd332fc302a03353f8fc9066e13b37c.scope: Deactivated successfully.
Oct 5 04:05:41 localhost podman[58714]: 2025-10-05 08:05:41.635611222 +0000 UTC m=+0.053728867 container died eabccaed4b2daee456895a431563dd07deb8021ef9244dbf314e154c2dcd4efc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T14:48:37, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, container_name=nova_compute_init_log, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 5 04:05:41 localhost podman[58736]: 2025-10-05 08:05:41.678508336 +0000 UTC m=+0.051791492 container died 5532d514d3c4b441f703691c3674e9266dd332fc302a03353f8fc9066e13b37c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., container_name=nova_virtqemud_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, build-date=2025-07-21T14:56:59, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 5 04:05:41 localhost podman[58714]: 2025-10-05 08:05:41.715758103 +0000 UTC m=+0.133875708 container cleanup eabccaed4b2daee456895a431563dd07deb8021ef9244dbf314e154c2dcd4efc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, config_id=tripleo_step2, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.9, io.openshift.expose-services=, container_name=nova_compute_init_log, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, release=1, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 5 04:05:41 localhost systemd[1]: libpod-conmon-eabccaed4b2daee456895a431563dd07deb8021ef9244dbf314e154c2dcd4efc.scope: Deactivated successfully.
Oct 5 04:05:41 localhost podman[58746]: 2025-10-05 08:05:41.749255176 +0000 UTC m=+0.108020768 container cleanup 5532d514d3c4b441f703691c3674e9266dd332fc302a03353f8fc9066e13b37c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T14:56:59, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step2, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, release=2, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 5 04:05:41 localhost systemd[1]: libpod-conmon-5532d514d3c4b441f703691c3674e9266dd332fc302a03353f8fc9066e13b37c.scope: Deactivated successfully.
Oct 5 04:05:41 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct 5 04:05:41 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct 5 04:05:42 localhost podman[58859]: 2025-10-05 08:05:42.109355564 +0000 UTC m=+0.083666301 container create f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=2, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=create_virtlogd_wrapper, io.buildah.version=1.33.12, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59)
Oct 5 04:05:42 localhost systemd[1]: Started libpod-conmon-f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86.scope.
Oct 5 04:05:42 localhost podman[58865]: 2025-10-05 08:05:42.142110866 +0000 UTC m=+0.099263195 container create 9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, container_name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, build-date=2025-07-21T16:28:53, config_id=tripleo_step2, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 5 04:05:42 localhost systemd[1]: Started libcrun container.
Oct 5 04:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e9de2b2ec23737f94de1f8bccf918a461ddca6ddb8186432fbf946e4c1bfc0/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Oct 5 04:05:42 localhost podman[58859]: 2025-10-05 08:05:42.066264004 +0000 UTC m=+0.040574761 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 5 04:05:42 localhost podman[58859]: 2025-10-05 08:05:42.171107423 +0000 UTC m=+0.145418130 container init f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, build-date=2025-07-21T14:56:59, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=2, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 5 04:05:42 localhost podman[58859]: 2025-10-05 08:05:42.180546806 +0000 UTC m=+0.154857533 container start f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, tcib_managed=true, release=2, config_id=tripleo_step2, container_name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T14:56:59, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 5 04:05:42 localhost podman[58859]: 2025-10-05 08:05:42.180824034 +0000 UTC m=+0.155134731 container attach f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, container_name=create_virtlogd_wrapper, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack
Platform 17.1 nova-libvirt, vcs-type=git, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, release=2, build-date=2025-07-21T14:56:59, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 04:05:42 localhost systemd[1]: Started libpod-conmon-9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636.scope. Oct 5 04:05:42 localhost podman[58865]: 2025-10-05 08:05:42.089005958 +0000 UTC m=+0.046158327 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 5 04:05:42 localhost systemd[1]: Started libcrun container. 
Oct 5 04:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5632412383ed2f8b39d9b37ad8687100409b96a666fd14c004993158a48ed6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 04:05:42 localhost podman[58865]: 2025-10-05 08:05:42.212536307 +0000 UTC m=+0.169688656 container init 9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., container_name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}) Oct 5 04:05:42 localhost podman[58865]: 2025-10-05 08:05:42.222231087 +0000 UTC m=+0.179383446 container start 9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, architecture=x86_64, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=create_haproxy_wrapper, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step2, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc.) 
Oct 5 04:05:42 localhost podman[58865]: 2025-10-05 08:05:42.222693169 +0000 UTC m=+0.179845518 container attach 9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, container_name=create_haproxy_wrapper, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step2, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:05:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5532d514d3c4b441f703691c3674e9266dd332fc302a03353f8fc9066e13b37c-userdata-shm.mount: Deactivated successfully. Oct 5 04:05:42 localhost systemd[1]: var-lib-containers-storage-overlay-bd7da012b8ac0fd067564e92349f80bc46f2da0551eb783c762923d47d0b2bb3-merged.mount: Deactivated successfully. Oct 5 04:05:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eabccaed4b2daee456895a431563dd07deb8021ef9244dbf314e154c2dcd4efc-userdata-shm.mount: Deactivated successfully. Oct 5 04:05:42 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.11 scrub starts Oct 5 04:05:42 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.11 scrub ok Oct 5 04:05:43 localhost ovs-vsctl[58961]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Oct 5 04:05:44 localhost systemd[1]: libpod-f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86.scope: Deactivated successfully. Oct 5 04:05:44 localhost systemd[1]: libpod-f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86.scope: Consumed 2.164s CPU time. 
Oct 5 04:05:44 localhost podman[58859]: 2025-10-05 08:05:44.350673786 +0000 UTC m=+2.324984573 container died f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=2, container_name=create_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, config_id=tripleo_step2, io.buildah.version=1.33.12) Oct 5 04:05:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86-userdata-shm.mount: Deactivated successfully. Oct 5 04:05:44 localhost systemd[1]: var-lib-containers-storage-overlay-93e9de2b2ec23737f94de1f8bccf918a461ddca6ddb8186432fbf946e4c1bfc0-merged.mount: Deactivated successfully. Oct 5 04:05:44 localhost podman[59112]: 2025-10-05 08:05:44.458151518 +0000 UTC m=+0.094887592 container cleanup f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step2, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 
'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, container_name=create_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=2, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container) Oct 5 04:05:44 localhost systemd[1]: libpod-conmon-f9ae4f4e11ddf05eff12ede486222d8f598159dec601e6528b4a419dc2a30b86.scope: Deactivated successfully. 
Oct 5 04:05:44 localhost python3[58603]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Oct 5 04:05:44 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 6.13 scrub starts Oct 5 04:05:44 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 6.13 scrub ok Oct 5 04:05:44 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.16 scrub starts Oct 5 04:05:44 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 4.16 scrub ok Oct 5 04:05:45 localhost systemd[1]: libpod-9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636.scope: Deactivated successfully. Oct 5 04:05:45 localhost systemd[1]: libpod-9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636.scope: Consumed 2.245s CPU time. 
Oct 5 04:05:45 localhost podman[58865]: 2025-10-05 08:05:45.406756264 +0000 UTC m=+3.363908613 container died 9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, container_name=create_haproxy_wrapper, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step2, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1, build-date=2025-07-21T16:28:53) Oct 5 04:05:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636-userdata-shm.mount: Deactivated successfully. Oct 5 04:05:45 localhost podman[59155]: 2025-10-05 08:05:45.509515456 +0000 UTC m=+0.090546253 container cleanup 9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=create_haproxy_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git) Oct 5 04:05:45 localhost systemd[1]: libpod-conmon-9ea7e400220a3b2c1b6c16eb638fd7fa46d0be5f02ce6c9b9dd81c9f65fff636.scope: Deactivated successfully. 
Oct 5 04:05:45 localhost python3[58603]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Oct 5 04:05:45 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.18 scrub starts Oct 5 04:05:45 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.18 scrub ok Oct 5 04:05:46 localhost python3[59209]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:46 localhost systemd[1]: var-lib-containers-storage-overlay-7c5632412383ed2f8b39d9b37ad8687100409b96a666fd14c004993158a48ed6-merged.mount: Deactivated successfully. 
Oct 5 04:05:47 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Oct 5 04:05:47 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Oct 5 04:05:47 localhost python3[59330]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005471150 step=2 update_config_hash_only=False
Oct 5 04:05:48 localhost python3[59346]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:48 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.10 deep-scrub starts
Oct 5 04:05:48 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.10 deep-scrub ok
Oct 5 04:05:48 localhost python3[59362]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True
Oct 5 04:05:50 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Oct 5 04:05:50 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Oct 5 04:05:51 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct 5 04:05:51 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct 5 04:05:52 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Oct 5 04:05:52 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Oct 5 04:05:54 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Oct 5 04:05:54 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Oct 5 04:05:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:05:54 localhost podman[59363]: 2025-10-05 08:05:54.685165261 +0000 UTC m=+0.086132310 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, container_name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, release=1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 5 04:05:54 localhost podman[59363]: 2025-10-05 08:05:54.876584071 +0000 UTC m=+0.277551130 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 5 04:05:54 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:05:56 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 5 04:05:56 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 5 04:05:58 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Oct 5 04:05:58 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Oct 5 04:06:01 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 5 04:06:01 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 5 04:06:02 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 5 04:06:02 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 5 04:06:02 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 5 04:06:02 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 5 04:06:03 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 5 04:06:03 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 5 04:06:03 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 5 04:06:03 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 5 04:06:06 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 5 04:06:06 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 5 04:06:09 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 5 04:06:09 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 5 04:06:10 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 5 04:06:10 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 5 04:06:14 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 5 04:06:14 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 5 04:06:16 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 5 04:06:16 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 5 04:06:16 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 5 04:06:16 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 5 04:06:17 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Oct 5 04:06:17 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Oct 5 04:06:18 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 5 04:06:18 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 5 04:06:19 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 5 04:06:19 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 5 04:06:19 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 5 04:06:19 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 5 04:06:22 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 5 04:06:23 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 5 04:06:23 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 5 04:06:23 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 5 04:06:24 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 5 04:06:24 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 5 04:06:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:06:25 localhost systemd[1]: tmp-crun.Y8dPvL.mount: Deactivated successfully.
Oct 5 04:06:25 localhost podman[59471]: 2025-10-05 08:06:25.664836333 +0000 UTC m=+0.080825502 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64)
Oct 5 04:06:25 localhost podman[59471]: 2025-10-05 08:06:25.882112622 +0000 UTC m=+0.298101811 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public)
Oct 5 04:06:25 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:06:27 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 5 04:06:27 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 5 04:06:29 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 5 04:06:29 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 5 04:06:30 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 5 04:06:30 localhost ceph-osd[31409]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 5 04:06:31 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 5 04:06:31 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 5 04:06:33 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 5 04:06:34 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 5 04:06:38 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 5 04:06:38 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 5 04:06:40 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 5 04:06:40 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 5 04:06:44 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Oct 5 04:06:44 localhost ceph-osd[32364]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Oct 5 04:06:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:06:56 localhost systemd[1]: tmp-crun.vLpoFA.mount: Deactivated successfully.
Oct 5 04:06:56 localhost podman[59500]: 2025-10-05 08:06:56.701186402 +0000 UTC m=+0.109349045 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, version=17.1.9)
Oct 5 04:06:56 localhost podman[59500]: 2025-10-05 08:06:56.904912994 +0000 UTC m=+0.313075617 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Oct 5 04:06:56 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:07:26 localhost systemd[1]: tmp-crun.nAzzTP.mount: Deactivated successfully.
Oct 5 04:07:26 localhost podman[59632]: 2025-10-05 08:07:26.767270942 +0000 UTC m=+0.107959627 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.expose-services=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, RELEASE=main, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 04:07:26 localhost podman[59632]: 2025-10-05 08:07:26.883319847 +0000 UTC m=+0.224008532 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=553, RELEASE=main, version=7, maintainer=Guillaume Abrioux )
Oct 5 04:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:07:27 localhost podman[59667]: 2025-10-05 08:07:27.089071297 +0000 UTC m=+0.141509768 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=metrics_qdr)
Oct 5 04:07:27 localhost podman[59667]: 2025-10-05 08:07:27.280424871 +0000 UTC m=+0.332863342 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Oct 5 04:07:27 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:07:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:07:57 localhost systemd[1]: tmp-crun.yhbuJO.mount: Deactivated successfully.
Oct 5 04:07:57 localhost podman[59803]: 2025-10-05 08:07:57.674134884 +0000 UTC m=+0.085896196 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, release=1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 5 04:07:57 localhost podman[59803]: 2025-10-05 08:07:57.888718163 +0000 UTC m=+0.300479425 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Oct 5 04:07:57 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:08:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:08:28 localhost podman[59842]: 2025-10-05 08:08:28.678301858 +0000 UTC m=+0.080073669 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., release=1, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed)
Oct 5 04:08:28 localhost podman[59842]: 2025-10-05 08:08:28.839838993 +0000 UTC m=+0.241610734 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.9, container_name=metrics_qdr, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, vcs-type=git, config_id=tripleo_step1)
Oct 5 04:08:28 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:08:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:08:59 localhost podman[59936]: 2025-10-05 08:08:59.680649501 +0000 UTC m=+0.087484698 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, release=1, distribution-scope=public, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:08:59 localhost podman[59936]: 2025-10-05 08:08:59.91092193 +0000 UTC m=+0.317757147 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=) Oct 5 04:08:59 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:09:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:09:30 localhost podman[59979]: 2025-10-05 08:09:30.194528695 +0000 UTC m=+0.068475421 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:09:30 localhost podman[59979]: 2025-10-05 08:09:30.370636329 +0000 UTC m=+0.244583065 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step1, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd) Oct 5 04:09:30 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:10:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:10:00 localhost podman[60072]: 2025-10-05 08:10:00.680486866 +0000 UTC m=+0.091287435 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd) Oct 5 04:10:00 localhost podman[60072]: 2025-10-05 08:10:00.879884209 +0000 UTC m=+0.290684768 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 5 04:10:00 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:10:15 localhost python3[60148]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:10:15 localhost python3[60193]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651815.1606455-100170-62381212186683/source _original_basename=tmpvw8sn4wq follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:16 localhost python3[60223]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:18 localhost ansible-async_wrapper.py[60395]: Invoked with 481190136877 3600 
/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651818.0922315-100484-30433388911181/AnsiballZ_command.py _ Oct 5 04:10:18 localhost ansible-async_wrapper.py[60398]: Starting module and watcher Oct 5 04:10:18 localhost ansible-async_wrapper.py[60398]: Start watching 60399 (3600) Oct 5 04:10:18 localhost ansible-async_wrapper.py[60399]: Start module (60399) Oct 5 04:10:18 localhost ansible-async_wrapper.py[60395]: Return async_wrapper task started. Oct 5 04:10:18 localhost python3[60417]: ansible-ansible.legacy.async_status Invoked with jid=481190136877.60395 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:10:22 localhost puppet-user[60419]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 04:10:22 localhost puppet-user[60419]: (file: /etc/puppet/hiera.yaml) Oct 5 04:10:22 localhost puppet-user[60419]: Warning: Undefined variable '::deploy_config_name'; Oct 5 04:10:22 localhost puppet-user[60419]: (file & line not available) Oct 5 04:10:22 localhost puppet-user[60419]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 04:10:22 localhost puppet-user[60419]: (file & line not available) Oct 5 04:10:22 localhost puppet-user[60419]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 5 04:10:22 localhost puppet-user[60419]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 5 04:10:22 localhost puppet-user[60419]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.15 seconds Oct 5 04:10:22 localhost puppet-user[60419]: Notice: Applied catalog in 0.03 seconds Oct 5 04:10:22 localhost puppet-user[60419]: Application: Oct 5 04:10:22 localhost puppet-user[60419]: Initial environment: production Oct 5 04:10:22 localhost puppet-user[60419]: Converged environment: production Oct 5 04:10:22 localhost puppet-user[60419]: Run mode: user Oct 5 04:10:22 localhost puppet-user[60419]: Changes: Oct 5 04:10:22 localhost puppet-user[60419]: Events: Oct 5 04:10:22 localhost puppet-user[60419]: Resources: Oct 5 04:10:22 localhost puppet-user[60419]: Total: 10 Oct 5 04:10:22 localhost puppet-user[60419]: Time: Oct 5 04:10:22 localhost puppet-user[60419]: Schedule: 0.00 Oct 5 04:10:22 localhost puppet-user[60419]: File: 0.00 Oct 5 04:10:22 localhost puppet-user[60419]: Exec: 0.01 Oct 5 04:10:22 localhost puppet-user[60419]: Augeas: 0.01 Oct 5 04:10:22 localhost puppet-user[60419]: Transaction evaluation: 0.03 Oct 5 04:10:22 localhost puppet-user[60419]: Catalog application: 0.03 Oct 5 04:10:22 localhost puppet-user[60419]: Config retrieval: 0.19 Oct 5 04:10:22 localhost puppet-user[60419]: Last run: 1759651822 Oct 5 04:10:22 localhost puppet-user[60419]: Filebucket: 0.00 Oct 5 04:10:22 localhost puppet-user[60419]: Total: 0.04 Oct 5 04:10:22 localhost puppet-user[60419]: Version: Oct 5 04:10:22 localhost puppet-user[60419]: Config: 1759651822 Oct 5 04:10:22 localhost puppet-user[60419]: Puppet: 7.10.0 Oct 5 04:10:22 localhost ansible-async_wrapper.py[60399]: Module complete (60399) Oct 5 04:10:23 localhost ansible-async_wrapper.py[60398]: Done in kid B. 
Oct 5 04:10:29 localhost python3[60545]: ansible-ansible.legacy.async_status Invoked with jid=481190136877.60395 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:10:30 localhost python3[60561]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 04:10:30 localhost python3[60577]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:31 localhost python3[60627]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:10:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:10:31 localhost systemd[1]: tmp-crun.BETHZQ.mount: Deactivated successfully. 
Oct 5 04:10:31 localhost podman[60646]: 2025-10-05 08:10:31.259040011 +0000 UTC m=+0.090371222 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:10:31 localhost python3[60645]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpeax2by8i recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 04:10:31 localhost podman[60646]: 2025-10-05 08:10:31.493770337 +0000 UTC m=+0.325101608 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:10:31 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. 
Oct 5 04:10:31 localhost python3[60720]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:10:32 localhost python3[60870]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 5 04:10:33 localhost python3[60904]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:10:35 localhost python3[60936]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:10:35 localhost python3[60986]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:10:36 localhost python3[61004]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:10:36 localhost python3[61066]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:10:37 localhost python3[61084]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:10:37 localhost python3[61146]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:10:37 localhost python3[61164]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:10:38 localhost python3[61226]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:10:38 localhost python3[61244]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:10:39 localhost python3[61274]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:10:39 localhost systemd[1]: Reloading.
Oct 5 04:10:39 localhost systemd-rc-local-generator[61298]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:10:39 localhost systemd-sysv-generator[61304]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:10:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:10:40 localhost python3[61360]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:10:40 localhost python3[61378]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:10:41 localhost python3[61440]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:10:41 localhost python3[61458]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:10:42 localhost python3[61488]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:10:42 localhost systemd[1]: Reloading.
Oct 5 04:10:42 localhost systemd-rc-local-generator[61519]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:10:42 localhost systemd-sysv-generator[61524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:10:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:10:42 localhost systemd[1]: Starting Create netns directory...
Oct 5 04:10:42 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 5 04:10:42 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 5 04:10:42 localhost systemd[1]: Finished Create netns directory.
Oct 5 04:10:42 localhost python3[61546]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 5 04:10:44 localhost python3[61602]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 5 04:10:44 localhost podman[61755]: 2025-10-05 08:10:44.806271483 +0000 UTC m=+0.079124898 container create 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., release=2, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=collectd, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1)
Oct 5 04:10:44 localhost podman[61785]: 2025-10-05 08:10:44.836713325 +0000 UTC m=+0.084218755 container create 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, container_name=rsyslog, tcib_managed=true, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, architecture=x86_64)
Oct 5 04:10:44 localhost podman[61774]: 2025-10-05 08:10:44.849835989 +0000 UTC m=+0.107118683 container create 918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, container_name=nova_virtlogd_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, distribution-scope=public)
Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.scope.
Oct 5 04:10:44 localhost podman[61775]: 2025-10-05 08:10:44.855725248 +0000 UTC m=+0.110893364 container create cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, container_name=nova_statedir_owner, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37)
Oct 5 04:10:44 localhost podman[61755]: 2025-10-05 08:10:44.763426126 +0000 UTC m=+0.036279551 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Oct 5 04:10:44 localhost systemd[1]: Started libcrun container.
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c1bb8c85867868083ee6bdbdf271740a2f1e31fdcf6b8f1d69303927ef66fa/merged/scripts supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c1bb8c85867868083ee6bdbdf271740a2f1e31fdcf6b8f1d69303927ef66fa/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost podman[61774]: 2025-10-05 08:10:44.771494634 +0000 UTC m=+0.028777368 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 5 04:10:44 localhost podman[61807]: 2025-10-05 08:10:44.875322607 +0000 UTC m=+0.102171639 container create f8dcba50d11832fed49d750ccf49da32db31271201772849f0672e50b2270a58 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, container_name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e.scope.
Oct 5 04:10:44 localhost podman[61775]: 2025-10-05 08:10:44.78357994 +0000 UTC m=+0.038748106 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 5 04:10:44 localhost podman[61785]: 2025-10-05 08:10:44.793610831 +0000 UTC m=+0.041116271 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Oct 5 04:10:44 localhost systemd[1]: Started libcrun container.
Oct 5 04:10:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1185b83ff8634f30dfa6512fe63fd6c78768f55295507b45184dd0e5077ec610/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost podman[61755]: 2025-10-05 08:10:44.896830828 +0000 UTC m=+0.169684253 container init 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., release=2, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1185b83ff8634f30dfa6512fe63fd6c78768f55295507b45184dd0e5077ec610/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1185b83ff8634f30dfa6512fe63fd6c78768f55295507b45184dd0e5077ec610/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost podman[61775]: 2025-10-05 08:10:44.903014445 +0000 UTC m=+0.158182551 container init cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step3, name=rhosp17/openstack-nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, container_name=nova_statedir_owner, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 5 04:10:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.
Oct 5 04:10:44 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring.
Oct 5 04:10:44 localhost podman[61775]: 2025-10-05 08:10:44.924022323 +0000 UTC m=+0.179190429 container start cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_id=tripleo_step3, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37)
Oct 5 04:10:44 localhost podman[61775]: 2025-10-05 08:10:44.92468255 +0000 UTC m=+0.179850666 container attach cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, version=17.1.9, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, container_name=nova_statedir_owner, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 5 04:10:44 localhost podman[61807]: 2025-10-05 08:10:44.82577914 +0000 UTC m=+0.052628202 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00.scope.
Oct 5 04:10:44 localhost systemd[1]: Created slice User Slice of UID 0.
Oct 5 04:10:44 localhost systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d.scope.
Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-f8dcba50d11832fed49d750ccf49da32db31271201772849f0672e50b2270a58.scope.
Oct 5 04:10:44 localhost systemd[1]: Started libcrun container.
Oct 5 04:10:44 localhost systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost systemd[1]: Starting User Manager for UID 0...
Oct 5 04:10:44 localhost systemd[1]: Started libcrun container.
Oct 5 04:10:44 localhost systemd[1]: libpod-cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e.scope: Deactivated successfully.
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d68cb7392e246942b2a3b582c63dcc04c4b9d2fce93b251d2f59828e400c38/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d68cb7392e246942b2a3b582c63dcc04c4b9d2fce93b251d2f59828e400c38/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost systemd[1]: Started libcrun container.
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d68cb7392e246942b2a3b582c63dcc04c4b9d2fce93b251d2f59828e400c38/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d68cb7392e246942b2a3b582c63dcc04c4b9d2fce93b251d2f59828e400c38/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d68cb7392e246942b2a3b582c63dcc04c4b9d2fce93b251d2f59828e400c38/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d68cb7392e246942b2a3b582c63dcc04c4b9d2fce93b251d2f59828e400c38/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d68cb7392e246942b2a3b582c63dcc04c4b9d2fce93b251d2f59828e400c38/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2ba6fb6e64a4b58661c047727a0714e4aaa1299df5507383cf28a1ea2eccb4/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:44 localhost podman[61775]: 2025-10-05 08:10:44.977805934 +0000 UTC m=+0.232974050 container died cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1)
Oct 5 04:10:44 localhost podman[61785]: 2025-10-05 08:10:44.977554108 +0000 UTC m=+0.225059588 container init 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.33.12, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack
osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, build-date=2025-07-21T12:58:40, container_name=rsyslog, distribution-scope=public, version=17.1.9) Oct 5 04:10:44 localhost podman[61807]: 2025-10-05 08:10:44.987099855 +0000 UTC m=+0.213948917 container init f8dcba50d11832fed49d750ccf49da32db31271201772849f0672e50b2270a58 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, version=17.1.9, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, vcs-type=git, container_name=ceilometer_init_log, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 5 04:10:44 localhost podman[61807]: 2025-10-05 08:10:44.992667375 +0000 UTC m=+0.219516397 container start f8dcba50d11832fed49d750ccf49da32db31271201772849f0672e50b2270a58 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, vcs-type=git, container_name=ceilometer_init_log, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step3) Oct 5 04:10:44 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Oct 5 04:10:45 localhost systemd[1]: libpod-f8dcba50d11832fed49d750ccf49da32db31271201772849f0672e50b2270a58.scope: Deactivated successfully. Oct 5 04:10:45 localhost podman[61774]: 2025-10-05 08:10:45.032712786 +0000 UTC m=+0.289995470 container init 918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, container_name=nova_virtlogd_wrapper, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 04:10:45 localhost podman[61774]: 2025-10-05 08:10:45.040371033 +0000 UTC m=+0.297653717 container start 918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_virtlogd_wrapper, release=2, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:10:45 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=012327e9705c184cfee14ca411150d67 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared 
--volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:45 localhost systemd[1]: libpod-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00.scope: Deactivated successfully. Oct 5 04:10:45 localhost podman[61755]: 2025-10-05 08:10:45.071061612 +0000 UTC m=+0.343915027 container start 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:04:03, release=2, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc.) 
Oct 5 04:10:45 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=da9a0dc7b40588672419e3ce10063e21 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 5 04:10:45 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring. 
Oct 5 04:10:45 localhost podman[61891]: 2025-10-05 08:10:45.090082976 +0000 UTC m=+0.085280684 container died f8dcba50d11832fed49d750ccf49da32db31271201772849f0672e50b2270a58 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_init_log, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1, architecture=x86_64, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:10:45 localhost systemd[61874]: Queued start job for default target Main User Target. Oct 5 04:10:45 localhost systemd[61874]: Created slice User Application Slice. Oct 5 04:10:45 localhost systemd[61874]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). 
Oct 5 04:10:45 localhost systemd[61874]: Started Daily Cleanup of User's Temporary Directories. Oct 5 04:10:45 localhost systemd[61874]: Reached target Paths. Oct 5 04:10:45 localhost systemd[61874]: Reached target Timers. Oct 5 04:10:45 localhost podman[61851]: 2025-10-05 08:10:45.100440836 +0000 UTC m=+0.175287894 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, build-date=2025-07-21T13:04:03, vcs-type=git, config_id=tripleo_step3, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:10:45 localhost systemd[61874]: Starting D-Bus User Message Bus Socket... Oct 5 04:10:45 localhost systemd[61874]: Starting Create User's Volatile Files and Directories... Oct 5 04:10:45 localhost systemd[61874]: Listening on D-Bus User Message Bus Socket. Oct 5 04:10:45 localhost systemd[61874]: Reached target Sockets. Oct 5 04:10:45 localhost systemd[61874]: Finished Create User's Volatile Files and Directories. Oct 5 04:10:45 localhost systemd[61874]: Reached target Basic System. Oct 5 04:10:45 localhost systemd[61874]: Reached target Main User Target. Oct 5 04:10:45 localhost systemd[61874]: Startup finished in 123ms. Oct 5 04:10:45 localhost systemd[1]: Started User Manager for UID 0. Oct 5 04:10:45 localhost systemd[1]: Started Session c1 of User root. Oct 5 04:10:45 localhost systemd[1]: Started Session c2 of User root. 
Oct 5 04:10:45 localhost podman[61851]: 2025-10-05 08:10:45.183796015 +0000 UTC m=+0.258643083 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, 
Inc., description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2) Oct 5 04:10:45 localhost systemd[1]: session-c2.scope: Deactivated successfully. Oct 5 04:10:45 localhost podman[61851]: unhealthy Oct 5 04:10:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:10:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Failed with result 'exit-code'. Oct 5 04:10:45 localhost systemd[1]: session-c1.scope: Deactivated successfully. 
Oct 5 04:10:45 localhost podman[61879]: 2025-10-05 08:10:45.201274918 +0000 UTC m=+0.214345828 container cleanup cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_statedir_owner, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc.) 
Oct 5 04:10:45 localhost systemd[1]: libpod-conmon-cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e.scope: Deactivated successfully. Oct 5 04:10:45 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Oct 5 04:10:45 localhost podman[61911]: 2025-10-05 08:10:45.271282378 +0000 UTC m=+0.258501071 container cleanup f8dcba50d11832fed49d750ccf49da32db31271201772849f0672e50b2270a58 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, container_name=ceilometer_init_log, distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:10:45 localhost systemd[1]: libpod-conmon-f8dcba50d11832fed49d750ccf49da32db31271201772849f0672e50b2270a58.scope: Deactivated successfully. 
Oct 5 04:10:45 localhost podman[61785]: 2025-10-05 08:10:45.296252132 +0000 UTC m=+0.543757552 container start 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, build-date=2025-07-21T12:58:40, container_name=rsyslog, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, architecture=x86_64, name=rhosp17/openstack-rsyslog, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, description=Red Hat OpenStack Platform 17.1 rsyslog) Oct 5 04:10:45 localhost podman[61944]: 2025-10-05 08:10:45.29804537 +0000 UTC m=+0.229697362 container died 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, 
com.redhat.component=openstack-rsyslog-container, tcib_managed=true, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, architecture=x86_64, vcs-type=git, build-date=2025-07-21T12:58:40, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:10:45 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d724ad8b89331350c29ab6a1bdffd03b --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 5 04:10:45 localhost podman[61944]: 2025-10-05 08:10:45.318171374 +0000 UTC m=+0.249823366 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20250721.1, build-date=2025-07-21T12:58:40, distribution-scope=public, tcib_managed=true, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, container_name=rsyslog, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, architecture=x86_64) Oct 5 04:10:45 localhost systemd[1]: libpod-conmon-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00.scope: Deactivated successfully. 
Oct 5 04:10:45 localhost podman[62093]: 2025-10-05 08:10:45.410196058 +0000 UTC m=+0.066544518 container create 8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1) Oct 5 04:10:45 localhost systemd[1]: Started libpod-conmon-8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf.scope. Oct 5 04:10:45 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f372f5b48a6fb930879a487e82e32d29444a8f7e852ff75f52040cd8edeaeeaf/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f372f5b48a6fb930879a487e82e32d29444a8f7e852ff75f52040cd8edeaeeaf/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f372f5b48a6fb930879a487e82e32d29444a8f7e852ff75f52040cd8edeaeeaf/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f372f5b48a6fb930879a487e82e32d29444a8f7e852ff75f52040cd8edeaeeaf/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost podman[62093]: 2025-10-05 08:10:45.473176099 +0000 UTC m=+0.129524559 container init 8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, 
maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 04:10:45 localhost podman[62093]: 2025-10-05 08:10:45.484904546 +0000 UTC m=+0.141253026 container start 8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 04:10:45 localhost podman[62093]: 2025-10-05 08:10:45.386831368 +0000 UTC m=+0.043179838 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd45fcc1519535c0c2218e221859c85fbae4b543a1a455ae6cd98a1c15d8714e-userdata-shm.mount: Deactivated successfully. Oct 5 04:10:45 localhost systemd[1]: var-lib-containers-storage-overlay-ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3-merged.mount: Deactivated successfully. Oct 5 04:10:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00-userdata-shm.mount: Deactivated successfully. 
Oct 5 04:10:45 localhost podman[62181]: 2025-10-05 08:10:45.863561139 +0000 UTC m=+0.094687098 container create 022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, name=rhosp17/openstack-nova-libvirt, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_virtsecretd, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:10:45 localhost systemd[1]: Started libpod-conmon-022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2.scope. Oct 5 04:10:45 localhost podman[62181]: 2025-10-05 08:10:45.818845691 +0000 UTC m=+0.049971690 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:45 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8161bfb3b20903755fd1dc15a4ab5bffc3b3459f22903df1eccf0b1b007c1ce9/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8161bfb3b20903755fd1dc15a4ab5bffc3b3459f22903df1eccf0b1b007c1ce9/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8161bfb3b20903755fd1dc15a4ab5bffc3b3459f22903df1eccf0b1b007c1ce9/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8161bfb3b20903755fd1dc15a4ab5bffc3b3459f22903df1eccf0b1b007c1ce9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8161bfb3b20903755fd1dc15a4ab5bffc3b3459f22903df1eccf0b1b007c1ce9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8161bfb3b20903755fd1dc15a4ab5bffc3b3459f22903df1eccf0b1b007c1ce9/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8161bfb3b20903755fd1dc15a4ab5bffc3b3459f22903df1eccf0b1b007c1ce9/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost podman[62181]: 2025-10-05 08:10:45.938846301 +0000 UTC m=+0.169972240 container init 022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, 
build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, maintainer=OpenStack TripleO Team, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, container_name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git) Oct 5 04:10:45 localhost podman[62181]: 2025-10-05 08:10:45.94654841 +0000 UTC m=+0.177674349 container start 022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=2, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., container_name=nova_virtsecretd) Oct 5 04:10:45 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS 
--env TRIPLEO_CONFIG_HASH=012327e9705c184cfee14ca411150d67 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:45 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring. Oct 5 04:10:46 localhost systemd[1]: Started Session c3 of User root. Oct 5 04:10:46 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
Oct 5 04:10:46 localhost podman[62317]: 2025-10-05 08:10:46.342326755 +0000 UTC m=+0.077144874 container create 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, tcib_managed=true, 
managed_by=tripleo_ansible, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:10:46 localhost systemd[1]: Started libpod-conmon-5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.scope. Oct 5 04:10:46 localhost podman[62330]: 2025-10-05 08:10:46.383378174 +0000 UTC m=+0.081858412 container create 21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.expose-services=, config_id=tripleo_step3, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T14:56:59) Oct 5 04:10:46 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a9d8baf35bc0bfdd1af3c321e72fe98328bf9d350d48953a4ebb7cb925693bb/merged/etc/target supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a9d8baf35bc0bfdd1af3c321e72fe98328bf9d350d48953a4ebb7cb925693bb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:10:46 localhost podman[62317]: 2025-10-05 08:10:46.410115305 +0000 UTC m=+0.144933434 container init 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, distribution-scope=public, architecture=x86_64, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:10:46 localhost podman[62317]: 2025-10-05 08:10:46.310993299 +0000 UTC m=+0.045811428 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 5 04:10:46 localhost systemd[1]: Started libpod-conmon-21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d.scope. Oct 5 04:10:46 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring. Oct 5 04:10:46 localhost systemd[1]: Started Session c4 of User root. Oct 5 04:10:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:10:46 localhost podman[62330]: 2025-10-05 08:10:46.341955855 +0000 UTC m=+0.040436103 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:46 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:46 localhost podman[62317]: 2025-10-05 08:10:46.445726407 +0000 UTC m=+0.180544516 container start 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ead5c1e8c394ab02ae8a983cb1adadda2588f7784aa316dd26a72cee1045f5/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ead5c1e8c394ab02ae8a983cb1adadda2588f7784aa316dd26a72cee1045f5/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ead5c1e8c394ab02ae8a983cb1adadda2588f7784aa316dd26a72cee1045f5/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ead5c1e8c394ab02ae8a983cb1adadda2588f7784aa316dd26a72cee1045f5/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ead5c1e8c394ab02ae8a983cb1adadda2588f7784aa316dd26a72cee1045f5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ead5c1e8c394ab02ae8a983cb1adadda2588f7784aa316dd26a72cee1045f5/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ead5c1e8c394ab02ae8a983cb1adadda2588f7784aa316dd26a72cee1045f5/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost python3[61602]: 
ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bfafc2f71ef1d8535e7a88ec76ac5234 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 5 04:10:46 localhost podman[62330]: 2025-10-05 08:10:46.454242437 +0000 UTC m=+0.152722675 container init 21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, container_name=nova_virtnodedevd, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:56:59, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64) Oct 5 04:10:46 localhost podman[62330]: 2025-10-05 08:10:46.46287016 +0000 UTC m=+0.161350398 container start 21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, container_name=nova_virtnodedevd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, version=17.1.9, 
vendor=Red Hat, Inc., architecture=x86_64, release=2, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 04:10:46 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=012327e9705c184cfee14ca411150d67 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume 
/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:46 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring. Oct 5 04:10:46 localhost systemd[1]: Started Session c5 of User root. Oct 5 04:10:46 localhost systemd[1]: session-c4.scope: Deactivated successfully. Oct 5 04:10:46 localhost kernel: Loading iSCSI transport class v2.0-870. Oct 5 04:10:46 localhost podman[62360]: 2025-10-05 08:10:46.582753496 +0000 UTC m=+0.129597220 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, 
name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 04:10:46 localhost podman[62360]: 2025-10-05 08:10:46.587877375 +0000 UTC m=+0.134721079 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, vcs-type=git, container_name=iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12) Oct 5 04:10:46 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:10:46 localhost systemd[1]: session-c5.scope: Deactivated successfully. 
Oct 5 04:10:46 localhost podman[62501]: 2025-10-05 08:10:46.894460942 +0000 UTC m=+0.086676411 container create 2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, batch=17.1_20250721.1, container_name=nova_virtstoraged, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, release=2, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, build-date=2025-07-21T14:56:59) Oct 5 04:10:46 localhost systemd[1]: Started libpod-conmon-2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8.scope. Oct 5 04:10:46 localhost podman[62501]: 2025-10-05 08:10:46.853526617 +0000 UTC m=+0.045742146 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:46 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88862842d7fd5ac5a70ec544f0eb4e87207193b8951a2358f33d18a478f7bf30/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88862842d7fd5ac5a70ec544f0eb4e87207193b8951a2358f33d18a478f7bf30/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88862842d7fd5ac5a70ec544f0eb4e87207193b8951a2358f33d18a478f7bf30/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88862842d7fd5ac5a70ec544f0eb4e87207193b8951a2358f33d18a478f7bf30/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88862842d7fd5ac5a70ec544f0eb4e87207193b8951a2358f33d18a478f7bf30/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88862842d7fd5ac5a70ec544f0eb4e87207193b8951a2358f33d18a478f7bf30/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88862842d7fd5ac5a70ec544f0eb4e87207193b8951a2358f33d18a478f7bf30/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost podman[62501]: 2025-10-05 08:10:46.969200861 +0000 UTC m=+0.161416330 container init 2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, build-date=2025-07-21T14:56:59, config_id=tripleo_step3, 
io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, container_name=nova_virtstoraged, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 04:10:46 localhost podman[62501]: 2025-10-05 08:10:46.980610179 +0000 UTC m=+0.172825618 container start 2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, version=17.1.9, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 
'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T14:56:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, release=2, managed_by=tripleo_ansible) Oct 5 04:10:46 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host 
--conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=012327e9705c184cfee14ca411150d67 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:47 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring. 
Oct 5 04:10:47 localhost systemd[1]: Started Session c6 of User root. Oct 5 04:10:47 localhost systemd[1]: session-c6.scope: Deactivated successfully. Oct 5 04:10:47 localhost podman[62604]: 2025-10-05 08:10:47.427610587 +0000 UTC m=+0.082145069 container create 0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, build-date=2025-07-21T14:56:59, distribution-scope=public) Oct 5 04:10:47 localhost systemd[1]: Started libpod-conmon-0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56.scope. Oct 5 04:10:47 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26ca429e8dedc8286a762f367a34d49217d302698ad26e3b362ce469133d4f4/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26ca429e8dedc8286a762f367a34d49217d302698ad26e3b362ce469133d4f4/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26ca429e8dedc8286a762f367a34d49217d302698ad26e3b362ce469133d4f4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26ca429e8dedc8286a762f367a34d49217d302698ad26e3b362ce469133d4f4/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26ca429e8dedc8286a762f367a34d49217d302698ad26e3b362ce469133d4f4/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26ca429e8dedc8286a762f367a34d49217d302698ad26e3b362ce469133d4f4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26ca429e8dedc8286a762f367a34d49217d302698ad26e3b362ce469133d4f4/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26ca429e8dedc8286a762f367a34d49217d302698ad26e3b362ce469133d4f4/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost podman[62604]: 2025-10-05 08:10:47.48514447 +0000 UTC m=+0.139678952 
container init 0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=nova_virtqemud, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, release=2, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:10:47 localhost podman[62604]: 2025-10-05 08:10:47.390763902 +0000 UTC m=+0.045298444 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:47 localhost podman[62604]: 2025-10-05 08:10:47.49477644 +0000 UTC m=+0.149310962 container start 0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:59, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, 
config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, architecture=x86_64, release=2, vcs-type=git, distribution-scope=public, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, version=17.1.9, container_name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 04:10:47 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=012327e9705c184cfee14ca411150d67 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume 
/var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:47 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring. Oct 5 04:10:47 localhost systemd[1]: Started Session c7 of User root. Oct 5 04:10:47 localhost systemd[1]: session-c7.scope: Deactivated successfully. Oct 5 04:10:47 localhost podman[62711]: 2025-10-05 08:10:47.965808777 +0000 UTC m=+0.094464381 container create 8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=nova_virtproxyd, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_id=tripleo_step3, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, name=rhosp17/openstack-nova-libvirt) Oct 5 04:10:48 localhost systemd[1]: Started libpod-conmon-8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6.scope. 
Oct 5 04:10:48 localhost podman[62711]: 2025-10-05 08:10:47.918006956 +0000 UTC m=+0.046662650 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:48 localhost systemd[1]: Started libcrun container. Oct 5 04:10:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/badeb4130f9d8cc5090bd2dca4bd725b665e85b1961f91b70a567215f9d62ee4/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/badeb4130f9d8cc5090bd2dca4bd725b665e85b1961f91b70a567215f9d62ee4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/badeb4130f9d8cc5090bd2dca4bd725b665e85b1961f91b70a567215f9d62ee4/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/badeb4130f9d8cc5090bd2dca4bd725b665e85b1961f91b70a567215f9d62ee4/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/badeb4130f9d8cc5090bd2dca4bd725b665e85b1961f91b70a567215f9d62ee4/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/badeb4130f9d8cc5090bd2dca4bd725b665e85b1961f91b70a567215f9d62ee4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/badeb4130f9d8cc5090bd2dca4bd725b665e85b1961f91b70a567215f9d62ee4/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:48 localhost podman[62711]: 2025-10-05 08:10:48.034605034 +0000 UTC m=+0.163260658 container 
init 8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=nova_virtproxyd, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2) Oct 5 04:10:48 localhost podman[62711]: 2025-10-05 08:10:48.041211192 +0000 UTC m=+0.169866816 container start 8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:59, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, vcs-type=git, container_name=nova_virtproxyd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}) Oct 5 04:10:48 localhost python3[61602]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=012327e9705c184cfee14ca411150d67 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume 
/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:48 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring. Oct 5 04:10:48 localhost systemd[1]: Started Session c8 of User root. Oct 5 04:10:48 localhost systemd[1]: session-c8.scope: Deactivated successfully. Oct 5 04:10:48 localhost python3[62795]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:48 localhost python3[62811]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:49 localhost python3[62827]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:49 localhost python3[62843]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:49 localhost python3[62859]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:50 localhost python3[62875]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:50 localhost python3[62891]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:50 localhost python3[62907]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:50 localhost python3[62923]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:51 localhost python3[62939]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:51 localhost python3[62955]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:51 localhost python3[62971]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:51 localhost python3[62987]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:52 localhost python3[63003]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:52 localhost python3[63019]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:52 localhost python3[63035]: ansible-stat Invoked with 
path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:53 localhost python3[63051]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:53 localhost python3[63067]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:53 localhost python3[63128]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651853.381952-101729-212223023140418/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:54 localhost python3[63157]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651853.381952-101729-212223023140418/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:54 localhost python3[63186]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651853.381952-101729-212223023140418/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:55 localhost python3[63215]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651853.381952-101729-212223023140418/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:55 localhost python3[63244]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651853.381952-101729-212223023140418/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:56 localhost python3[63273]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651853.381952-101729-212223023140418/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:57 localhost python3[63302]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651853.381952-101729-212223023140418/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True 
follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:57 localhost python3[63331]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651853.381952-101729-212223023140418/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:58 localhost python3[63360]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651853.381952-101729-212223023140418/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:58 localhost systemd[1]: Stopping User Manager for UID 0... Oct 5 04:10:58 localhost systemd[61874]: Activating special unit Exit the Session... Oct 5 04:10:58 localhost systemd[61874]: Stopped target Main User Target. Oct 5 04:10:58 localhost systemd[61874]: Stopped target Basic System. Oct 5 04:10:58 localhost systemd[61874]: Stopped target Paths. Oct 5 04:10:58 localhost systemd[61874]: Stopped target Sockets. Oct 5 04:10:58 localhost systemd[61874]: Stopped target Timers. Oct 5 04:10:58 localhost systemd[61874]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 04:10:58 localhost systemd[61874]: Closed D-Bus User Message Bus Socket. 
Oct 5 04:10:58 localhost systemd[61874]: Stopped Create User's Volatile Files and Directories. Oct 5 04:10:58 localhost systemd[61874]: Removed slice User Application Slice. Oct 5 04:10:58 localhost systemd[61874]: Reached target Shutdown. Oct 5 04:10:58 localhost systemd[61874]: Finished Exit the Session. Oct 5 04:10:58 localhost systemd[61874]: Reached target Exit the Session. Oct 5 04:10:58 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 04:10:58 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 04:10:58 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 04:10:58 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 5 04:10:58 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 04:10:58 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 04:10:58 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 04:10:58 localhost python3[63376]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 04:10:58 localhost systemd[1]: Reloading. Oct 5 04:10:58 localhost systemd-rc-local-generator[63399]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:10:58 localhost systemd-sysv-generator[63403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:10:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 04:10:59 localhost python3[63428]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:00 localhost systemd[1]: Reloading. Oct 5 04:11:00 localhost systemd-rc-local-generator[63455]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:00 localhost systemd-sysv-generator[63459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:00 localhost systemd[1]: Starting collectd container... Oct 5 04:11:00 localhost systemd[1]: Started collectd container. Oct 5 04:11:01 localhost python3[63497]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:11:01 localhost systemd[1]: Reloading. 
Oct 5 04:11:01 localhost podman[63499]: 2025-10-05 08:11:01.66497501 +0000 UTC m=+0.117926926 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, distribution-scope=public, container_name=metrics_qdr, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:11:01 localhost systemd-rc-local-generator[63540]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:01 localhost systemd-sysv-generator[63543]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:01 localhost podman[63499]: 2025-10-05 08:11:01.832288937 +0000 UTC m=+0.285240873 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:11:01 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:11:01 localhost systemd[1]: Starting iscsid container... Oct 5 04:11:01 localhost systemd[1]: Started iscsid container. Oct 5 04:11:02 localhost python3[63593]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:02 localhost systemd[1]: Reloading. Oct 5 04:11:03 localhost systemd-sysv-generator[63620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:03 localhost systemd-rc-local-generator[63616]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:03 localhost systemd[1]: Starting nova_virtlogd_wrapper container... Oct 5 04:11:03 localhost systemd[1]: Started nova_virtlogd_wrapper container. Oct 5 04:11:03 localhost python3[63661]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:04 localhost systemd[1]: Reloading. Oct 5 04:11:05 localhost systemd-sysv-generator[63693]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:05 localhost systemd-rc-local-generator[63686]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:05 localhost systemd[1]: Starting nova_virtnodedevd container... Oct 5 04:11:05 localhost tripleo-start-podman-container[63701]: Creating additional drop-in dependency for "nova_virtnodedevd" (21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d) Oct 5 04:11:05 localhost systemd[1]: Reloading. Oct 5 04:11:05 localhost systemd-rc-local-generator[63754]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 04:11:05 localhost systemd-sysv-generator[63758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:05 localhost systemd[1]: Started nova_virtnodedevd container. Oct 5 04:11:06 localhost python3[63785]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:06 localhost sshd[63788]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:11:07 localhost systemd[1]: Reloading. Oct 5 04:11:07 localhost systemd-rc-local-generator[63813]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:07 localhost systemd-sysv-generator[63819]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:07 localhost systemd[1]: Starting nova_virtproxyd container... Oct 5 04:11:07 localhost tripleo-start-podman-container[63827]: Creating additional drop-in dependency for "nova_virtproxyd" (8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6) Oct 5 04:11:08 localhost systemd[1]: Reloading. Oct 5 04:11:08 localhost systemd-rc-local-generator[63883]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 04:11:08 localhost systemd-sysv-generator[63888]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:08 localhost systemd[1]: Started nova_virtproxyd container. Oct 5 04:11:08 localhost python3[63910]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:08 localhost systemd[1]: Reloading. Oct 5 04:11:09 localhost systemd-sysv-generator[63943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:09 localhost systemd-rc-local-generator[63936]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:09 localhost systemd[1]: Starting nova_virtqemud container... Oct 5 04:11:09 localhost tripleo-start-podman-container[63950]: Creating additional drop-in dependency for "nova_virtqemud" (0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56) Oct 5 04:11:09 localhost systemd[1]: Reloading. Oct 5 04:11:09 localhost systemd-sysv-generator[64010]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:09 localhost systemd-rc-local-generator[64007]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:09 localhost systemd[1]: Started nova_virtqemud container. Oct 5 04:11:10 localhost python3[64035]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:10 localhost systemd[1]: Reloading. Oct 5 04:11:10 localhost systemd-sysv-generator[64063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:10 localhost systemd-rc-local-generator[64059]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:10 localhost systemd[1]: Starting nova_virtsecretd container... Oct 5 04:11:11 localhost tripleo-start-podman-container[64075]: Creating additional drop-in dependency for "nova_virtsecretd" (022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2) Oct 5 04:11:11 localhost systemd[1]: Reloading. Oct 5 04:11:11 localhost systemd-rc-local-generator[64130]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:11 localhost systemd-sysv-generator[64133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:11 localhost systemd[1]: Started nova_virtsecretd container. Oct 5 04:11:12 localhost python3[64160]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:12 localhost systemd[1]: Reloading. Oct 5 04:11:12 localhost systemd-sysv-generator[64191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:12 localhost systemd-rc-local-generator[64187]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:12 localhost systemd[1]: Starting nova_virtstoraged container... Oct 5 04:11:12 localhost tripleo-start-podman-container[64201]: Creating additional drop-in dependency for "nova_virtstoraged" (2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8) Oct 5 04:11:12 localhost systemd[1]: Reloading. Oct 5 04:11:12 localhost systemd-sysv-generator[64260]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:12 localhost systemd-rc-local-generator[64256]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 04:11:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:12 localhost systemd[1]: Started nova_virtstoraged container. Oct 5 04:11:13 localhost python3[64284]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:13 localhost systemd[1]: Reloading. Oct 5 04:11:13 localhost systemd-sysv-generator[64312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:13 localhost systemd-rc-local-generator[64308]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:13 localhost systemd[1]: Starting rsyslog container... Oct 5 04:11:14 localhost systemd[1]: Started libcrun container. 
Oct 5 04:11:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:14 localhost podman[64323]: 2025-10-05 08:11:14.051155314 +0000 UTC m=+0.121146891 container init 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20250721.1, architecture=x86_64, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, build-date=2025-07-21T12:58:40, config_id=tripleo_step3, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 04:11:14 localhost podman[64323]: 2025-10-05 08:11:14.062878251 +0000 UTC m=+0.132869818 container start 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 
['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, version=17.1.9, container_name=rsyslog, release=1, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:40, summary=Red Hat OpenStack Platform 17.1 rsyslog) Oct 5 04:11:14 localhost podman[64323]: rsyslog Oct 5 04:11:14 localhost systemd[1]: Started rsyslog container. Oct 5 04:11:14 localhost systemd[1]: libpod-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00.scope: Deactivated successfully. 
Oct 5 04:11:14 localhost podman[64358]: 2025-10-05 08:11:14.249744636 +0000 UTC m=+0.063429664 container died 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, config_id=tripleo_step3, container_name=rsyslog, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog) Oct 5 04:11:14 localhost podman[64358]: 2025-10-05 08:11:14.275885592 +0000 UTC m=+0.089570570 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, version=17.1.9, com.redhat.component=openstack-rsyslog-container, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T12:58:40, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:11:14 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:11:14 localhost podman[64371]: 2025-10-05 08:11:14.369090629 +0000 UTC m=+0.063079485 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, version=17.1.9) Oct 5 04:11:14 localhost podman[64371]: rsyslog Oct 5 04:11:14 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:14 localhost python3[64397]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:11:14 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1. Oct 5 04:11:14 localhost systemd[1]: Stopped rsyslog container. Oct 5 04:11:14 localhost systemd[1]: Starting rsyslog container... 
Oct 5 04:11:14 localhost systemd[1]: Started libcrun container. Oct 5 04:11:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:14 localhost podman[64398]: 2025-10-05 08:11:14.713067915 +0000 UTC m=+0.119912709 container init 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:40, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, name=rhosp17/openstack-rsyslog, release=1, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rsyslog) Oct 5 04:11:14 localhost podman[64398]: 2025-10-05 08:11:14.722188042 +0000 UTC m=+0.129032836 container start 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:40, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, version=17.1.9, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, container_name=rsyslog, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:11:14 localhost podman[64398]: rsyslog Oct 5 04:11:14 localhost systemd[1]: Started rsyslog container. Oct 5 04:11:14 localhost systemd[1]: libpod-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00.scope: Deactivated successfully. 
Oct 5 04:11:14 localhost podman[64420]: 2025-10-05 08:11:14.898121011 +0000 UTC m=+0.052983791 container died 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, 
name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, config_id=tripleo_step3, container_name=rsyslog, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-rsyslog-container, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167) Oct 5 04:11:14 localhost podman[64420]: 2025-10-05 08:11:14.922410257 +0000 UTC m=+0.077272957 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20250721.1, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat 
OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T12:58:40, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, name=rhosp17/openstack-rsyslog, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, container_name=rsyslog) Oct 5 04:11:14 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:11:14 localhost systemd[1]: var-lib-containers-storage-overlay-ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3-merged.mount: Deactivated successfully. Oct 5 04:11:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00-userdata-shm.mount: Deactivated successfully. 
Oct 5 04:11:15 localhost podman[64456]: 2025-10-05 08:11:15.00325056 +0000 UTC m=+0.060022592 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 04:11:15 localhost podman[64456]: rsyslog Oct 5 04:11:15 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:15 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2. Oct 5 04:11:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:11:15 localhost systemd[1]: Stopped rsyslog container. Oct 5 04:11:15 localhost systemd[1]: Starting rsyslog container... Oct 5 04:11:15 localhost podman[64514]: 2025-10-05 08:11:15.455203522 +0000 UTC m=+0.102913160 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, version=17.1.9, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:11:15 localhost podman[64514]: 2025-10-05 08:11:15.520240068 +0000 UTC m=+0.167949696 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, vendor=Red Hat, Inc., version=17.1.9, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:11:15 localhost systemd[1]: Started libcrun container. 
Oct 5 04:11:15 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:11:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:15 localhost podman[64517]: 2025-10-05 08:11:15.55031041 +0000 UTC m=+0.194819271 container init 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, container_name=rsyslog, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:40, distribution-scope=public, name=rhosp17/openstack-rsyslog, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}) Oct 5 04:11:15 localhost podman[64517]: 2025-10-05 08:11:15.559742835 +0000 UTC m=+0.204251696 container start 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, version=17.1.9, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=rsyslog, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:40, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container) Oct 5 04:11:15 localhost podman[64517]: rsyslog Oct 5 04:11:15 localhost systemd[1]: Started rsyslog container. Oct 5 04:11:15 localhost systemd[1]: libpod-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00.scope: Deactivated successfully. 
Oct 5 04:11:15 localhost podman[64577]: 2025-10-05 08:11:15.717014741 +0000 UTC m=+0.042155229 container died 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:40, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 
rsyslog, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog) Oct 5 04:11:15 localhost podman[64577]: 2025-10-05 08:11:15.736301592 +0000 UTC m=+0.061441990 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, config_id=tripleo_step3, build-date=2025-07-21T12:58:40, 
vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, version=17.1.9, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=rsyslog, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1) Oct 5 04:11:15 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:11:15 localhost podman[64603]: 2025-10-05 08:11:15.832270363 +0000 UTC m=+0.059153718 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, release=1, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-07-21T12:58:40, version=17.1.9, container_name=rsyslog, batch=17.1_20250721.1, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog) Oct 5 04:11:15 localhost podman[64603]: rsyslog Oct 5 04:11:15 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:15 localhost systemd[1]: var-lib-containers-storage-overlay-ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3-merged.mount: Deactivated successfully. Oct 5 04:11:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00-userdata-shm.mount: Deactivated successfully. Oct 5 04:11:15 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Oct 5 04:11:15 localhost systemd[1]: Stopped rsyslog container. Oct 5 04:11:15 localhost systemd[1]: Starting rsyslog container... Oct 5 04:11:16 localhost systemd[1]: tmp-crun.76A4Pd.mount: Deactivated successfully. 
Oct 5 04:11:16 localhost python3[64629]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005471150 step=3 update_config_hash_only=False Oct 5 04:11:16 localhost systemd[1]: Started libcrun container. Oct 5 04:11:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:16 localhost podman[64630]: 2025-10-05 08:11:16.152362925 +0000 UTC m=+0.140744011 container init 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, version=17.1.9, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, build-date=2025-07-21T12:58:40, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.33.12) Oct 5 04:11:16 localhost podman[64630]: 2025-10-05 08:11:16.158833359 +0000 UTC m=+0.147214435 container start 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, version=17.1.9, container_name=rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step3, architecture=x86_64, build-date=2025-07-21T12:58:40, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog) Oct 5 04:11:16 localhost podman[64630]: rsyslog Oct 5 04:11:16 localhost systemd[1]: Started rsyslog container. Oct 5 04:11:16 localhost systemd[1]: libpod-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00.scope: Deactivated successfully. 
Oct 5 04:11:16 localhost podman[64652]: 2025-10-05 08:11:16.318805648 +0000 UTC m=+0.057762580 container died 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-07-21T12:58:40, name=rhosp17/openstack-rsyslog, architecture=x86_64, version=17.1.9, config_id=tripleo_step3, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, 
vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog) Oct 5 04:11:16 localhost podman[64652]: 2025-10-05 08:11:16.343529486 +0000 UTC m=+0.082486388 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vendor=Red Hat, Inc.) Oct 5 04:11:16 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:11:16 localhost podman[64664]: 2025-10-05 08:11:16.434578155 +0000 UTC m=+0.064704058 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, com.redhat.component=openstack-rsyslog-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, release=1, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, build-date=2025-07-21T12:58:40, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 5 04:11:16 localhost podman[64664]: rsyslog Oct 5 04:11:16 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:16 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Oct 5 04:11:16 localhost systemd[1]: Stopped rsyslog container. Oct 5 04:11:16 localhost systemd[1]: Starting rsyslog container... Oct 5 04:11:16 localhost python3[64691]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:11:16 localhost systemd[1]: Started libcrun container. 
Oct 5 04:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:11:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:16 localhost podman[64692]: 2025-10-05 08:11:16.718955732 +0000 UTC m=+0.127101552 container init 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, container_name=rsyslog, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:40, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, com.redhat.component=openstack-rsyslog-container) Oct 5 04:11:16 localhost podman[64692]: 2025-10-05 08:11:16.730486773 +0000 UTC m=+0.138632603 container start 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1) Oct 5 04:11:16 localhost podman[64692]: rsyslog Oct 5 04:11:16 localhost systemd[1]: Started rsyslog container. 
Oct 5 04:11:16 localhost podman[64710]: 2025-10-05 08:11:16.782784305 +0000 UTC m=+0.068762337 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, version=17.1.9, container_name=iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack 
Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc.) Oct 5 04:11:16 localhost podman[64710]: 2025-10-05 08:11:16.793875015 +0000 UTC m=+0.079853067 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:11:16 localhost systemd[1]: libpod-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00.scope: Deactivated successfully. Oct 5 04:11:16 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:11:16 localhost podman[64735]: 2025-10-05 08:11:16.85221883 +0000 UTC m=+0.039213339 container died 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, build-date=2025-07-21T12:58:40, distribution-scope=public, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.expose-services=) Oct 5 04:11:16 localhost podman[64735]: 2025-10-05 08:11:16.877317338 +0000 UTC m=+0.064311817 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, version=17.1.9, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, build-date=2025-07-21T12:58:40, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 5 04:11:16 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:11:16 localhost podman[64765]: 2025-10-05 08:11:16.960435942 +0000 UTC m=+0.054525813 container cleanup 490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, com.redhat.component=openstack-rsyslog-container, vcs-type=git, batch=17.1_20250721.1, container_name=rsyslog, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd724ad8b89331350c29ab6a1bdffd03b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, build-date=2025-07-21T12:58:40, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, 
config_id=tripleo_step3, architecture=x86_64) Oct 5 04:11:16 localhost podman[64765]: rsyslog Oct 5 04:11:16 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:16 localhost systemd[1]: var-lib-containers-storage-overlay-ee86a22b7593d6d656efa64101c8ea90c2417f449a97abe444ae18055a10e5a3-merged.mount: Deactivated successfully. Oct 5 04:11:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-490970fecc5ba0367545dd1f624b3800e40ea364eff7306637e9f46f54373c00-userdata-shm.mount: Deactivated successfully. Oct 5 04:11:17 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Oct 5 04:11:17 localhost systemd[1]: Stopped rsyslog container. Oct 5 04:11:17 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Oct 5 04:11:17 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:17 localhost systemd[1]: Failed to start rsyslog container. Oct 5 04:11:17 localhost python3[64766]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 5 04:11:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:11:32 localhost podman[64779]: 2025-10-05 08:11:32.682561952 +0000 UTC m=+0.086019524 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, release=1, architecture=x86_64) Oct 5 04:11:32 localhost podman[64779]: 2025-10-05 08:11:32.884867344 +0000 UTC m=+0.288324916 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12) Oct 5 04:11:32 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:11:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:11:45 localhost systemd[1]: tmp-crun.wMGcwG.mount: Deactivated successfully. 
Oct 5 04:11:45 localhost podman[64885]: 2025-10-05 08:11:45.686609799 +0000 UTC m=+0.087910885 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, 
batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, version=17.1.9, container_name=collectd, managed_by=tripleo_ansible, release=2, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public) Oct 5 04:11:45 localhost podman[64885]: 2025-10-05 08:11:45.726082914 +0000 UTC m=+0.127383990 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, tcib_managed=true, version=17.1.9, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Oct 5 04:11:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:11:47 localhost podman[64905]: 2025-10-05 08:11:47.687661724 +0000 UTC m=+0.089580380 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, distribution-scope=public, 
release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:11:47 localhost podman[64905]: 2025-10-05 08:11:47.722426913 +0000 UTC m=+0.124345609 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:11:47 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:12:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:12:03 localhost systemd[1]: tmp-crun.CiMdbd.mount: Deactivated successfully. 
Oct 5 04:12:03 localhost podman[64924]: 2025-10-05 08:12:03.671253396 +0000 UTC m=+0.082888059 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1) Oct 5 04:12:03 localhost podman[64924]: 2025-10-05 08:12:03.857756141 +0000 UTC m=+0.269390804 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, architecture=x86_64, version=17.1.9, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:12:03 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:12:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:12:16 localhost podman[64953]: 2025-10-05 08:12:16.674070609 +0000 UTC m=+0.080277587 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, container_name=collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:04:03) Oct 5 04:12:16 localhost podman[64953]: 2025-10-05 08:12:16.685708223 +0000 UTC m=+0.091915211 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, 
version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3) Oct 
5 04:12:16 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:12:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:12:18 localhost podman[64974]: 2025-10-05 08:12:18.669420972 +0000 UTC m=+0.079551669 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15) Oct 5 04:12:18 localhost podman[64974]: 2025-10-05 08:12:18.684731495 +0000 UTC m=+0.094862182 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9) Oct 5 04:12:18 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:12:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:12:34 localhost podman[65007]: 2025-10-05 08:12:34.690452304 +0000 UTC m=+0.086139947 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, container_name=metrics_qdr, distribution-scope=public, 
com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64) Oct 5 04:12:34 localhost podman[65007]: 2025-10-05 08:12:34.882741344 +0000 UTC m=+0.278428997 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vcs-type=git) Oct 5 04:12:34 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:12:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:12:47 localhost podman[65100]: 2025-10-05 08:12:47.678266151 +0000 UTC m=+0.089232371 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, config_id=tripleo_step3, container_name=collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:12:47 localhost podman[65100]: 2025-10-05 08:12:47.695491676 +0000 UTC m=+0.106457906 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
config_id=tripleo_step3, container_name=collectd, version=17.1.9) Oct 5 04:12:47 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:12:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:12:49 localhost systemd[1]: tmp-crun.kDyC9Z.mount: Deactivated successfully. Oct 5 04:12:49 localhost podman[65119]: 2025-10-05 08:12:49.668236058 +0000 UTC m=+0.079998860 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, container_name=iscsid, io.buildah.version=1.33.12) Oct 5 04:12:49 localhost podman[65119]: 2025-10-05 08:12:49.702335309 +0000 UTC m=+0.114098101 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-type=git, version=17.1.9, distribution-scope=public, release=1, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, architecture=x86_64, container_name=iscsid, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:12:49 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:13:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:13:05 localhost podman[65137]: 2025-10-05 08:13:05.670491523 +0000 UTC m=+0.082426435 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
container_name=metrics_qdr, release=1, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:13:05 localhost podman[65137]: 2025-10-05 08:13:05.919194428 +0000 UTC m=+0.331129350 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, config_id=tripleo_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, version=17.1.9, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd) Oct 5 04:13:05 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:13:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:13:18 localhost systemd[1]: tmp-crun.u3BXNS.mount: Deactivated successfully. 
Oct 5 04:13:18 localhost podman[65165]: 2025-10-05 08:13:18.682632751 +0000 UTC m=+0.092714464 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, 
com.redhat.component=openstack-collectd-container, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:13:18 localhost podman[65165]: 2025-10-05 08:13:18.717255107 +0000 UTC m=+0.127336830 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.openshift.expose-services=, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:13:18 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:13:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:13:20 localhost podman[65186]: 2025-10-05 08:13:20.672754023 +0000 UTC m=+0.082942261 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Oct 5 04:13:20 localhost podman[65186]: 2025-10-05 08:13:20.707217213 +0000 UTC m=+0.117405411 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Oct 5 04:13:20 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:13:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:13:36 localhost systemd[1]: tmp-crun.PRPqrb.mount: Deactivated successfully. Oct 5 04:13:36 localhost podman[65220]: 2025-10-05 08:13:36.328897502 +0000 UTC m=+0.099881007 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-type=git, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, release=1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=) Oct 5 04:13:36 localhost podman[65220]: 2025-10-05 08:13:36.531695598 +0000 UTC m=+0.302679053 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step1) Oct 5 04:13:36 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:13:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:13:49 localhost podman[65311]: 2025-10-05 08:13:49.671416744 +0000 UTC m=+0.082677485 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:04:03, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible) Oct 5 04:13:49 localhost podman[65311]: 2025-10-05 08:13:49.684843079 +0000 UTC m=+0.096103820 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, container_name=collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, release=2, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:13:49 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:13:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:13:51 localhost podman[65331]: 2025-10-05 08:13:51.670614559 +0000 UTC m=+0.080163388 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, release=1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:13:51 localhost podman[65331]: 2025-10-05 08:13:51.683157749 +0000 UTC m=+0.092706578 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.buildah.version=1.33.12, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:13:51 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:14:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:14:06 localhost podman[65352]: 2025-10-05 08:14:06.675756641 +0000 UTC m=+0.086283214 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step1, io.buildah.version=1.33.12, distribution-scope=public, release=1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59) Oct 5 04:14:06 localhost podman[65352]: 2025-10-05 08:14:06.870994042 +0000 UTC m=+0.281520645 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, tcib_managed=true, version=17.1.9) Oct 5 04:14:06 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:14:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:14:20 localhost systemd[1]: tmp-crun.dGn3Ik.mount: Deactivated successfully. 
Oct 5 04:14:20 localhost podman[65382]: 2025-10-05 08:14:20.698791136 +0000 UTC m=+0.104015995 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, 
name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 5 04:14:20 localhost podman[65382]: 2025-10-05 08:14:20.711031048 +0000 UTC m=+0.116255967 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:14:20 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:14:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:14:22 localhost podman[65402]: 2025-10-05 08:14:22.683772434 +0000 UTC m=+0.093469960 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, container_name=iscsid, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9) Oct 5 04:14:22 localhost podman[65402]: 2025-10-05 08:14:22.720949214 +0000 UTC m=+0.130646700 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_id=tripleo_step3, container_name=iscsid, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=) Oct 5 04:14:22 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:14:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:14:37 localhost podman[65421]: 2025-10-05 08:14:37.67391198 +0000 UTC m=+0.081021480 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Oct 5 04:14:37 localhost podman[65421]: 2025-10-05 08:14:37.894044357 +0000 UTC m=+0.301153817 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:14:37 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:14:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:14:51 localhost podman[65577]: 2025-10-05 08:14:51.675680417 +0000 UTC m=+0.087184838 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, config_id=tripleo_step3, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 5 04:14:51 localhost podman[65577]: 2025-10-05 08:14:51.716817163 +0000 UTC m=+0.128321534 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=2, com.redhat.component=openstack-collectd-container, container_name=collectd, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 5 04:14:51 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:14:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:14:53 localhost podman[65596]: 2025-10-05 08:14:53.671612881 +0000 UTC m=+0.082360647 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, release=1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., tcib_managed=true, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:14:53 localhost podman[65596]: 2025-10-05 08:14:53.680235775 +0000 UTC m=+0.090983551 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, release=1, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:14:53 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:15:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:15:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4605 writes, 20K keys, 4605 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4605 writes, 482 syncs, 9.55 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 401 writes, 827 keys, 401 commit groups, 1.0 writes per commit group, ingest: 0.66 MB, 0.00 MB/s#012Interval WAL: 401 writes, 197 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:15:02 localhost python3[65662]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:03 localhost python3[65707]: ansible-ansible.legacy.copy Invoked with 
dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652102.3929565-108619-146899140054106/source _original_basename=tmppfn7hf2d follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:04 localhost python3[65769]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:04 localhost python3[65812]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652103.9075305-108719-8001391636917/source _original_basename=tmpkgvb41xo follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:15:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5059 writes, 22K keys, 5059 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5059 writes, 580 syncs, 8.72 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 322 writes, 650 keys, 322 commit groups, 1.0 writes per commit group, ingest: 0.51 MB, 0.00 MB/s#012Interval WAL: 322 writes, 161 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 
MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:15:05 localhost python3[65874]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:05 localhost python3[65917]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652104.847766-108841-228599476045339/source _original_basename=tmpskq3l4ld follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:06 localhost python3[65979]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:06 localhost python3[66022]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652105.7871518-108899-159319657604752/source _original_basename=tmpw8pqfeqm follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:06 localhost python3[66052]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 04:15:06 localhost systemd[1]: Reloading. 
Oct 5 04:15:07 localhost systemd-rc-local-generator[66073]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:07 localhost systemd-sysv-generator[66077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:15:07 localhost systemd[1]: Reloading. Oct 5 04:15:07 localhost systemd-rc-local-generator[66110]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:07 localhost systemd-sysv-generator[66115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:15:08 localhost python3[66141]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:15:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:15:08 localhost systemd[1]: Reloading. 
Oct 5 04:15:08 localhost podman[66143]: 2025-10-05 08:15:08.249136715 +0000 UTC m=+0.086353866 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, container_name=metrics_qdr, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, config_id=tripleo_step1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack 
osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true) Oct 5 04:15:08 localhost systemd-sysv-generator[66181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:08 localhost systemd-rc-local-generator[66177]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:15:08 localhost podman[66143]: 2025-10-05 08:15:08.466839236 +0000 UTC m=+0.304056377 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, vcs-type=git, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9) Oct 5 04:15:08 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:15:08 localhost systemd[1]: Reloading. Oct 5 04:15:08 localhost systemd-sysv-generator[66232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:08 localhost systemd-rc-local-generator[66229]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 5 04:15:08 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. Oct 5 04:15:09 localhost python3[66258]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 04:15:09 localhost systemd[1]: Reloading. Oct 5 04:15:09 localhost systemd-rc-local-generator[66281]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:09 localhost systemd-sysv-generator[66285]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 04:15:10 localhost python3[66342]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:10 localhost python3[66385]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652109.682482-108991-55559900005722/source _original_basename=tmp6o574_jy follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:10 localhost python3[66415]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:15:11 localhost systemd[1]: Reloading. Oct 5 04:15:11 localhost systemd-sysv-generator[66446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:11 localhost systemd-rc-local-generator[66441]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:15:11 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. 
Oct 5 04:15:11 localhost python3[66470]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:13 localhost ansible-async_wrapper.py[66642]: Invoked with 342920936920 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652112.8784907-109077-4284549476474/AnsiballZ_command.py _ Oct 5 04:15:13 localhost ansible-async_wrapper.py[66645]: Starting module and watcher Oct 5 04:15:13 localhost ansible-async_wrapper.py[66645]: Start watching 66646 (3600) Oct 5 04:15:13 localhost ansible-async_wrapper.py[66646]: Start module (66646) Oct 5 04:15:13 localhost ansible-async_wrapper.py[66642]: Return async_wrapper task started. Oct 5 04:15:13 localhost python3[66666]: ansible-ansible.legacy.async_status Invoked with jid=342920936920.66642 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:15:16 localhost puppet-user[66665]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 04:15:16 localhost puppet-user[66665]: (file: /etc/puppet/hiera.yaml) Oct 5 04:15:16 localhost puppet-user[66665]: Warning: Undefined variable '::deploy_config_name'; Oct 5 04:15:16 localhost puppet-user[66665]: (file & line not available) Oct 5 04:15:17 localhost puppet-user[66665]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 04:15:17 localhost puppet-user[66665]: (file & line not available) Oct 5 04:15:17 localhost puppet-user[66665]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 5 04:15:17 localhost puppet-user[66665]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:15:17 localhost puppet-user[66665]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:15:17 localhost puppet-user[66665]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:15:17 localhost puppet-user[66665]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:15:17 localhost puppet-user[66665]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:15:17 localhost puppet-user[66665]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:15:17 localhost puppet-user[66665]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:15:17 localhost puppet-user[66665]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:15:17 localhost puppet-user[66665]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:15:17 localhost puppet-user[66665]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:15:17 localhost puppet-user[66665]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:15:17 localhost puppet-user[66665]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:15:17 localhost puppet-user[66665]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:15:17 localhost puppet-user[66665]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:15:17 localhost puppet-user[66665]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:15:17 localhost puppet-user[66665]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:15:17 localhost puppet-user[66665]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:15:17 localhost puppet-user[66665]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 5 04:15:17 localhost puppet-user[66665]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.29 seconds Oct 5 04:15:18 localhost ansible-async_wrapper.py[66645]: 66646 still running (3600) Oct 5 04:15:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:15:22 localhost podman[66799]: 2025-10-05 08:15:22.695648687 +0000 UTC m=+0.095967737 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, release=2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-collectd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:15:22 localhost podman[66799]: 2025-10-05 08:15:22.709585456 +0000 UTC m=+0.109904476 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:15:22 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:15:23 localhost ansible-async_wrapper.py[66645]: 66646 still running (3595) Oct 5 04:15:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:15:23 localhost systemd[1]: tmp-crun.Sby9qD.mount: Deactivated successfully. 
Oct 5 04:15:24 localhost podman[66886]: 2025-10-05 08:15:24.001419233 +0000 UTC m=+0.094544719 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1, tcib_managed=true, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) Oct 5 04:15:24 localhost podman[66886]: 2025-10-05 08:15:24.038016226 +0000 UTC m=+0.131141712 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:15:24 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:15:24 localhost python3[66885]: ansible-ansible.legacy.async_status Invoked with jid=342920936920.66642 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:15:25 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 04:15:25 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 04:15:25 localhost systemd[1]: Reloading. Oct 5 04:15:25 localhost systemd-rc-local-generator[66975]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:25 localhost systemd-sysv-generator[66984]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:15:25 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 04:15:26 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. 
Oct 5 04:15:26 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 04:15:26 localhost systemd[1]: man-db-cache-update.service: Consumed 1.305s CPU time. Oct 5 04:15:26 localhost systemd[1]: run-re62b0a09a2a44de183547727fb655183.service: Deactivated successfully. Oct 5 04:15:27 localhost puppet-user[66665]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created Oct 5 04:15:27 localhost puppet-user[66665]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}8fbb2cda190edb17e00f9641b255187f117e67f74024d6a00ea4a47042336fab' Oct 5 04:15:27 localhost puppet-user[66665]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd' Oct 5 04:15:27 localhost puppet-user[66665]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea' Oct 5 04:15:27 localhost puppet-user[66665]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97' Oct 5 04:15:27 localhost puppet-user[66665]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events Oct 5 04:15:28 localhost ansible-async_wrapper.py[66645]: 66646 still running (3590) Oct 5 04:15:32 localhost puppet-user[66665]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully Oct 5 04:15:32 localhost systemd[1]: Reloading. 
Oct 5 04:15:32 localhost systemd-rc-local-generator[68035]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:32 localhost systemd-sysv-generator[68038]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:15:32 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon.... Oct 5 04:15:32 localhost snmpd[68045]: Can't find directory of RPM packages Oct 5 04:15:32 localhost snmpd[68045]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB Oct 5 04:15:32 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon.. Oct 5 04:15:32 localhost systemd[1]: Reloading. Oct 5 04:15:33 localhost systemd-sysv-generator[68078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:33 localhost systemd-rc-local-generator[68074]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:15:33 localhost systemd[1]: Reloading. Oct 5 04:15:33 localhost systemd-sysv-generator[68110]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 04:15:33 localhost systemd-rc-local-generator[68106]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:33 localhost ansible-async_wrapper.py[66645]: 66646 still running (3585)
Oct 5 04:15:33 localhost puppet-user[66665]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
Oct 5 04:15:33 localhost puppet-user[66665]: Notice: Applied catalog in 16.29 seconds
Oct 5 04:15:33 localhost puppet-user[66665]: Application:
Oct 5 04:15:33 localhost puppet-user[66665]: Initial environment: production
Oct 5 04:15:33 localhost puppet-user[66665]: Converged environment: production
Oct 5 04:15:33 localhost puppet-user[66665]: Run mode: user
Oct 5 04:15:33 localhost puppet-user[66665]: Changes:
Oct 5 04:15:33 localhost puppet-user[66665]: Total: 8
Oct 5 04:15:33 localhost puppet-user[66665]: Events:
Oct 5 04:15:33 localhost puppet-user[66665]: Success: 8
Oct 5 04:15:33 localhost puppet-user[66665]: Total: 8
Oct 5 04:15:33 localhost puppet-user[66665]: Resources:
Oct 5 04:15:33 localhost puppet-user[66665]: Restarted: 1
Oct 5 04:15:33 localhost puppet-user[66665]: Changed: 8
Oct 5 04:15:33 localhost puppet-user[66665]: Out of sync: 8
Oct 5 04:15:33 localhost puppet-user[66665]: Total: 19
Oct 5 04:15:33 localhost puppet-user[66665]: Time:
Oct 5 04:15:33 localhost puppet-user[66665]: Filebucket: 0.00
Oct 5 04:15:33 localhost puppet-user[66665]: Schedule: 0.00
Oct 5 04:15:33 localhost puppet-user[66665]: Augeas: 0.01
Oct 5 04:15:33 localhost puppet-user[66665]: File: 0.10
Oct 5 04:15:33 localhost puppet-user[66665]: Config retrieval: 0.36
Oct 5 04:15:33 localhost puppet-user[66665]: Service: 1.18
Oct 5 04:15:33 localhost puppet-user[66665]: Transaction evaluation: 16.28
Oct 5 04:15:33 localhost puppet-user[66665]: Catalog application: 16.29
Oct 5 04:15:33 localhost puppet-user[66665]: Last run: 1759652133
Oct 5 04:15:33 localhost puppet-user[66665]: Exec: 5.08
Oct 5 04:15:33 localhost puppet-user[66665]: Package: 9.69
Oct 5 04:15:33 localhost puppet-user[66665]: Total: 16.29
Oct 5 04:15:33 localhost puppet-user[66665]: Version:
Oct 5 04:15:33 localhost puppet-user[66665]: Config: 1759652116
Oct 5 04:15:33 localhost puppet-user[66665]: Puppet: 7.10.0
Oct 5 04:15:33 localhost ansible-async_wrapper.py[66646]: Module complete (66646)
Oct 5 04:15:34 localhost python3[68135]: ansible-ansible.legacy.async_status Invoked with jid=342920936920.66642 mode=status _async_dir=/tmp/.ansible_async
Oct 5 04:15:35 localhost python3[68151]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 04:15:35 localhost python3[68167]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:15:35 localhost python3[68217]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:36 localhost python3[68235]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpoyphl0ac recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 04:15:36 localhost python3[68265]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:37 localhost python3[68368]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 5 04:15:38 localhost ansible-async_wrapper.py[66645]: Done in kid B.
Oct 5 04:15:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:15:38 localhost systemd[1]: tmp-crun.iHzJG5.mount: Deactivated successfully.
Oct 5 04:15:38 localhost podman[68387]: 2025-10-05 08:15:38.700717659 +0000 UTC m=+0.106427461 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, batch=17.1_20250721.1, config_id=tripleo_step1, version=17.1.9)
Oct 5 04:15:38 localhost python3[68388]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:38 localhost podman[68387]: 2025-10-05 08:15:38.904843312 +0000 UTC m=+0.310553114 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.openshift.expose-services=, config_id=tripleo_step1)
Oct 5 04:15:38 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:15:39 localhost python3[68446]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:15:40 localhost python3[68496]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:40 localhost python3[68514]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:41 localhost python3[68576]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:41 localhost python3[68594]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:41 localhost python3[68656]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:42 localhost python3[68674]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:42 localhost python3[68736]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:43 localhost python3[68754]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:43 localhost python3[68784]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:15:43 localhost systemd[1]: Reloading.
Oct 5 04:15:43 localhost systemd-sysv-generator[68811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:15:43 localhost systemd-rc-local-generator[68806]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:44 localhost python3[68870]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:44 localhost python3[68888]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:45 localhost python3[68950]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:45 localhost python3[68968]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:46 localhost python3[68998]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:15:46 localhost systemd[1]: Reloading.
Oct 5 04:15:46 localhost systemd-sysv-generator[69026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:15:46 localhost systemd-rc-local-generator[69020]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:46 localhost systemd[1]: Starting Create netns directory...
Oct 5 04:15:46 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 5 04:15:46 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 5 04:15:46 localhost systemd[1]: Finished Create netns directory.
Oct 5 04:15:47 localhost python3[69085]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 5 04:15:48 localhost podman[69280]:
Oct 5 04:15:48 localhost podman[69280]: 2025-10-05 08:15:48.920862264 +0000 UTC m=+0.083751905 container create 08725debfba8a513521b3a215f9f546d516cbf35b3ce28c314c52f2e5b0a24b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_montalcini, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.openshift.tags=rhceph ceph, release=553)
Oct 5 04:15:48 localhost systemd[1]: Started libpod-conmon-08725debfba8a513521b3a215f9f546d516cbf35b3ce28c314c52f2e5b0a24b6.scope.
Oct 5 04:15:48 localhost systemd[1]: Started libcrun container.
Oct 5 04:15:48 localhost podman[69280]: 2025-10-05 08:15:48.889341129 +0000 UTC m=+0.052230800 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 04:15:48 localhost podman[69280]: 2025-10-05 08:15:48.996482898 +0000 UTC m=+0.159372539 container init 08725debfba8a513521b3a215f9f546d516cbf35b3ce28c314c52f2e5b0a24b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_montalcini, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, distribution-scope=public, description=Red Hat Ceph Storage 7, release=553, vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 04:15:49 localhost systemd[1]: tmp-crun.ht3hvy.mount: Deactivated successfully.
Oct 5 04:15:49 localhost podman[69280]: 2025-10-05 08:15:49.010661992 +0000 UTC m=+0.173551643 container start 08725debfba8a513521b3a215f9f546d516cbf35b3ce28c314c52f2e5b0a24b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_montalcini, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 5 04:15:49 localhost podman[69280]: 2025-10-05 08:15:49.010973351 +0000 UTC m=+0.173862992 container attach 08725debfba8a513521b3a215f9f546d516cbf35b3ce28c314c52f2e5b0a24b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_montalcini, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_BRANCH=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, ceph=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_CLEAN=True)
Oct 5 04:15:49 localhost crazy_montalcini[69295]: 167 167
Oct 5 04:15:49 localhost systemd[1]: libpod-08725debfba8a513521b3a215f9f546d516cbf35b3ce28c314c52f2e5b0a24b6.scope: Deactivated successfully.
Oct 5 04:15:49 localhost podman[69280]: 2025-10-05 08:15:49.014362624 +0000 UTC m=+0.177252265 container died 08725debfba8a513521b3a215f9f546d516cbf35b3ce28c314c52f2e5b0a24b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_montalcini, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, release=553, vcs-type=git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 04:15:49 localhost podman[69300]: 2025-10-05 08:15:49.119021615 +0000 UTC m=+0.090954031 container remove 08725debfba8a513521b3a215f9f546d516cbf35b3ce28c314c52f2e5b0a24b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_montalcini, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, ceph=True)
Oct 5 04:15:49 localhost systemd[1]: libpod-conmon-08725debfba8a513521b3a215f9f546d516cbf35b3ce28c314c52f2e5b0a24b6.scope: Deactivated successfully.
Oct 5 04:15:49 localhost podman[69338]:
Oct 5 04:15:49 localhost podman[69338]: 2025-10-05 08:15:49.344377704 +0000 UTC m=+0.080910497 container create 032654cb1cb67aec2c0eee44c25bc314f4c6677771d656112b5d4e9d47fdf024 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_carver, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, build-date=2025-09-24T08:57:55)
Oct 5 04:15:49 localhost podman[69338]: 2025-10-05 08:15:49.318639765 +0000 UTC m=+0.055172558 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-032654cb1cb67aec2c0eee44c25bc314f4c6677771d656112b5d4e9d47fdf024.scope.
Oct 5 04:15:49 localhost python3[69332]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 5 04:15:49 localhost systemd[1]: Started libcrun container.
Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aef377180d4e2fea7dd53459f19bd22887b40eda0b62ceace5f4f361d1ea581/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aef377180d4e2fea7dd53459f19bd22887b40eda0b62ceace5f4f361d1ea581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aef377180d4e2fea7dd53459f19bd22887b40eda0b62ceace5f4f361d1ea581/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:49 localhost podman[69338]: 2025-10-05 08:15:49.447516484 +0000 UTC m=+0.184049277 container init 032654cb1cb67aec2c0eee44c25bc314f4c6677771d656112b5d4e9d47fdf024 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_carver, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Oct 5 04:15:49 localhost podman[69338]: 2025-10-05 08:15:49.45878368 +0000 UTC m=+0.195316483 container start 032654cb1cb67aec2c0eee44c25bc314f4c6677771d656112b5d4e9d47fdf024 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_carver, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7)
Oct 5 04:15:49 localhost podman[69338]: 2025-10-05 08:15:49.46354894 +0000 UTC m=+0.200081783 container attach 032654cb1cb67aec2c0eee44c25bc314f4c6677771d656112b5d4e9d47fdf024 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_carver, io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12)
Oct 5 04:15:49 localhost podman[69512]: 2025-10-05 08:15:49.762411815 +0000 UTC m=+0.076513659 container create 7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-type=git, managed_by=tripleo_ansible, release=1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, version=17.1.9, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 5 04:15:49 localhost podman[69513]: 2025-10-05 08:15:49.78726368 +0000 UTC m=+0.100775147 container create 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1,
name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, release=1, distribution-scope=public) Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f.scope. Oct 5 04:15:49 localhost systemd[1]: Started libcrun container. Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.scope. Oct 5 04:15:49 localhost podman[69512]: 2025-10-05 08:15:49.811310462 +0000 UTC m=+0.125412336 container init 7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_id=tripleo_step4, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, container_name=configure_cms_options, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 5 04:15:49 localhost podman[69513]: 2025-10-05 08:15:49.720019333 +0000 UTC m=+0.033530810 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Oct 5 04:15:49 localhost podman[69516]: 2025-10-05 08:15:49.720101196 +0000 UTC m=+0.030925781 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Oct 5 04:15:49 localhost podman[69516]: 2025-10-05 08:15:49.819842974 +0000 UTC m=+0.130667569 container create 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, 
container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:15:49 localhost podman[69512]: 2025-10-05 08:15:49.724224428 +0000 UTC m=+0.038326292 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 04:15:49 localhost podman[69568]: 2025-10-05 08:15:49.825483677 +0000 UTC m=+0.068660555 container create ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_libvirt_init_secret, architecture=x86_64, io.openshift.expose-services=, release=2, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 5 04:15:49 localhost systemd[1]: Started libcrun container. Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8379cddc00dbe5dcf7036da062404e87efadd4333dd5918c48ece607468bfa14/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2.scope. Oct 5 04:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:15:49 localhost podman[69513]: 2025-10-05 08:15:49.866470531 +0000 UTC m=+0.179981998 container init 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4) Oct 5 04:15:49 localhost systemd[1]: Started libcrun container. 
Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9d6f308127fb583c768901b7dd809ae5c7be1b51750a1da49e523616d3d704/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9d6f308127fb583c768901b7dd809ae5c7be1b51750a1da49e523616d3d704/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9d6f308127fb583c768901b7dd809ae5c7be1b51750a1da49e523616d3d704/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost podman[69512]: 2025-10-05 08:15:49.872995737 +0000 UTC m=+0.187097601 container start 7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options) Oct 5 04:15:49 localhost podman[69512]: 2025-10-05 08:15:49.873274635 +0000 UTC m=+0.187376519 container attach 7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, build-date=2025-07-21T13:28:44, vcs-type=git, release=1, tcib_managed=true) Oct 5 04:15:49 localhost podman[69568]: 2025-10-05 08:15:49.788034231 +0000 
UTC m=+0.031211099 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.scope. Oct 5 04:15:49 localhost ovs-vsctl[69625]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Oct 5 04:15:49 localhost podman[69513]: 2025-10-05 08:15:49.907503234 +0000 UTC m=+0.221014721 container start 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:15:49 localhost podman[69553]: 2025-10-05 08:15:49.91030981 +0000 UTC m=+0.181570430 container create c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-cron, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, architecture=x86_64) Oct 5 04:15:49 localhost systemd[1]: libpod-7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f.scope: Deactivated successfully. 
Oct 5 04:15:49 localhost python3[69332]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b9a01754dad058662a16b1bcdedd274e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro 
--volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Oct 5 04:15:49 localhost systemd[1]: Started libcrun container. Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8570cdbf5236721445e2cc9274307680fdc914cb35802c43017704dbfad14cb1/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost podman[69568]: 2025-10-05 08:15:49.935527335 +0000 UTC m=+0.178704203 container init ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, version=17.1.9, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, release=2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:56:59, config_id=tripleo_step4, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=) Oct 5 04:15:49 localhost systemd[1]: var-lib-containers-storage-overlay-3af07e356536486fc9a85cfc26ba4a857c25e8c6269aa56756e1baf356f3d3ab-merged.mount: Deactivated successfully. 
Oct 5 04:15:49 localhost podman[69568]: 2025-10-05 08:15:49.947656195 +0000 UTC m=+0.190833063 container start ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_libvirt_init_secret, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step4, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, build-date=2025-07-21T14:56:59, tcib_managed=true, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 04:15:49 localhost podman[69568]: 2025-10-05 08:15:49.948051346 +0000 UTC m=+0.191228254 container attach ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step4, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_libvirt_init_secret) Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.scope. Oct 5 04:15:49 localhost podman[69553]: 2025-10-05 08:15:49.869492193 +0000 UTC m=+0.140752823 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 5 04:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. 
Oct 5 04:15:49 localhost podman[69516]: 2025-10-05 08:15:49.979624093 +0000 UTC m=+0.290448688 container init 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:15:49 localhost systemd[1]: Started libcrun container. Oct 5 04:15:49 localhost podman[69616]: 2025-10-05 08:15:49.996382578 +0000 UTC m=+0.093817128 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) Oct 5 04:15:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af758892ab760708ec72004cbcaa49f310dccfa379de6a731502123c0acc8fbf/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. 
Oct 5 04:15:50 localhost podman[69516]: 2025-10-05 08:15:50.013323818 +0000 UTC m=+0.324148393 container start 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64) Oct 5 04:15:50 localhost python3[69332]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b9a01754dad058662a16b1bcdedd274e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Oct 5 04:15:50 localhost podman[69512]: 2025-10-05 08:15:50.017969064 +0000 UTC m=+0.332070928 container died 7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=configure_cms_options, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:15:50 localhost podman[69616]: 2025-10-05 08:15:50.033580937 +0000 UTC m=+0.131015517 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1) Oct 5 04:15:50 localhost podman[69616]: unhealthy Oct 5 04:15:50 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:15:50 localhost systemd[1]: 
80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed with result 'exit-code'. Oct 5 04:15:50 localhost podman[69633]: 2025-10-05 08:15:50.07787017 +0000 UTC m=+0.149065828 container cleanup 7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=configure_cms_options, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ovn-controller) Oct 5 04:15:50 localhost systemd[1]: libpod-ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2.scope: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: libpod-conmon-7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f.scope: Deactivated successfully. Oct 5 04:15:50 localhost python3[69332]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:15:50 localhost podman[69553]: 2025-10-05 08:15:50.176888809 +0000 UTC m=+0.448149429 container init c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-cron, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, description=Red Hat 
OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:15:50 localhost podman[69568]: 2025-10-05 08:15:50.240143967 +0000 UTC m=+0.483320845 container died ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', 
'/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, container_name=nova_libvirt_init_secret, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, release=2, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:15:50 localhost podman[69553]: 2025-10-05 08:15:50.252515803 +0000 UTC m=+0.523776423 container start c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, name=rhosp17/openstack-cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:15:50 localhost python3[69332]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 5 04:15:50 localhost podman[70004]: 2025-10-05 08:15:50.265145475 +0000 UTC m=+0.175215958 container cleanup ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_libvirt_init_secret, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, 
vendor=Red Hat, Inc.) Oct 5 04:15:50 localhost systemd[1]: libpod-conmon-ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2.scope: Deactivated successfully. Oct 5 04:15:50 localhost python3[69332]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=012327e9705c184cfee14ca411150d67 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro 
--volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Oct 5 04:15:50 localhost podman[69786]: 2025-10-05 08:15:50.147514141 +0000 UTC m=+0.135213972 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:15:50 localhost podman[70489]: 2025-10-05 08:15:50.341695784 +0000 UTC m=+0.139156639 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, container_name=logrotate_crond, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:15:50 localhost podman[70489]: 2025-10-05 08:15:50.376650623 +0000 UTC m=+0.174111478 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, 
name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:15:50 localhost systemd[1]: 
c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:15:50 localhost podman[69786]: 2025-10-05 08:15:50.385141244 +0000 UTC m=+0.372841065 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:15:50 localhost podman[69786]: unhealthy Oct 5 04:15:50 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:15:50 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Failed with result 'exit-code'. Oct 5 04:15:50 localhost podman[71011]: 2025-10-05 08:15:50.457655593 +0000 UTC m=+0.064637876 container create 90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=setup_ovs_manager, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 04:15:50 localhost podman[71042]: 2025-10-05 08:15:50.480343439 +0000 UTC m=+0.083450867 container create 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, container_name=nova_migration_target, config_id=tripleo_step4, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37) Oct 5 04:15:50 localhost systemd[1]: Started libpod-conmon-90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd.scope. Oct 5 04:15:50 localhost systemd[1]: Started libcrun container. Oct 5 04:15:50 localhost systemd[1]: Started libpod-conmon-789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.scope. 
Oct 5 04:15:50 localhost systemd[1]: Started libcrun container. Oct 5 04:15:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de907bff4ad3d3e1c7b6c9c4a1859902dc0671a32f334119ad23f86dab0e0a0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:50 localhost podman[71011]: 2025-10-05 08:15:50.425375836 +0000 UTC m=+0.032358139 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 5 04:15:50 localhost podman[71042]: 2025-10-05 08:15:50.43545928 +0000 UTC m=+0.038566718 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:15:50 localhost podman[71042]: 2025-10-05 08:15:50.539731441 +0000 UTC m=+0.142838869 container init 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, name=rhosp17/openstack-nova-compute, version=17.1.9, tcib_managed=true, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:15:50 localhost kind_carver[69353]: [ Oct 5 04:15:50 localhost kind_carver[69353]: { Oct 5 04:15:50 localhost kind_carver[69353]: "available": false, Oct 5 04:15:50 localhost kind_carver[69353]: "ceph_device": false, Oct 5 04:15:50 localhost kind_carver[69353]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 5 04:15:50 localhost kind_carver[69353]: "lsm_data": {}, Oct 5 04:15:50 localhost kind_carver[69353]: "lvs": [], Oct 5 04:15:50 localhost kind_carver[69353]: "path": "/dev/sr0", Oct 5 04:15:50 localhost kind_carver[69353]: "rejected_reasons": [ Oct 5 04:15:50 localhost kind_carver[69353]: "Has a FileSystem", Oct 5 04:15:50 localhost kind_carver[69353]: "Insufficient space (<5GB)" Oct 5 04:15:50 localhost kind_carver[69353]: ], Oct 5 04:15:50 localhost kind_carver[69353]: "sys_api": { Oct 5 04:15:50 localhost kind_carver[69353]: "actuators": null, Oct 5 
04:15:50 localhost kind_carver[69353]: "device_nodes": "sr0", Oct 5 04:15:50 localhost kind_carver[69353]: "human_readable_size": "482.00 KB", Oct 5 04:15:50 localhost kind_carver[69353]: "id_bus": "ata", Oct 5 04:15:50 localhost kind_carver[69353]: "model": "QEMU DVD-ROM", Oct 5 04:15:50 localhost kind_carver[69353]: "nr_requests": "2", Oct 5 04:15:50 localhost kind_carver[69353]: "partitions": {}, Oct 5 04:15:50 localhost kind_carver[69353]: "path": "/dev/sr0", Oct 5 04:15:50 localhost kind_carver[69353]: "removable": "1", Oct 5 04:15:50 localhost kind_carver[69353]: "rev": "2.5+", Oct 5 04:15:50 localhost kind_carver[69353]: "ro": "0", Oct 5 04:15:50 localhost kind_carver[69353]: "rotational": "1", Oct 5 04:15:50 localhost kind_carver[69353]: "sas_address": "", Oct 5 04:15:50 localhost kind_carver[69353]: "sas_device_handle": "", Oct 5 04:15:50 localhost kind_carver[69353]: "scheduler_mode": "mq-deadline", Oct 5 04:15:50 localhost kind_carver[69353]: "sectors": 0, Oct 5 04:15:50 localhost kind_carver[69353]: "sectorsize": "2048", Oct 5 04:15:50 localhost kind_carver[69353]: "size": 493568.0, Oct 5 04:15:50 localhost kind_carver[69353]: "support_discard": "0", Oct 5 04:15:50 localhost kind_carver[69353]: "type": "disk", Oct 5 04:15:50 localhost kind_carver[69353]: "vendor": "QEMU" Oct 5 04:15:50 localhost kind_carver[69353]: } Oct 5 04:15:50 localhost kind_carver[69353]: } Oct 5 04:15:50 localhost kind_carver[69353]: ] Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:15:50 localhost podman[71011]: 2025-10-05 08:15:50.56768674 +0000 UTC m=+0.174669053 container init 90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=setup_ovs_manager, vcs-type=git, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 5 04:15:50 localhost podman[71042]: 2025-10-05 08:15:50.574707911 +0000 UTC m=+0.177815349 container start 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:15:50 localhost podman[71011]: 2025-10-05 08:15:50.579277605 +0000 UTC m=+0.186259908 container start 90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, build-date=2025-07-21T16:28:53, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, container_name=setup_ovs_manager, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:15:50 localhost podman[71011]: 2025-10-05 08:15:50.580465958 +0000 UTC m=+0.187448321 container attach 90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, tcib_managed=true, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include 
tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., distribution-scope=public, container_name=setup_ovs_manager, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12) Oct 5 04:15:50 localhost python3[69332]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env 
TRIPLEO_CONFIG_HASH=012327e9705c184cfee14ca411150d67 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume 
/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:15:50 localhost systemd[1]: libpod-032654cb1cb67aec2c0eee44c25bc314f4c6677771d656112b5d4e9d47fdf024.scope: Deactivated successfully. Oct 5 04:15:50 localhost podman[71743]: 2025-10-05 08:15:50.645469613 +0000 UTC m=+0.070838485 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, config_id=tripleo_step4, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 5 04:15:50 localhost podman[71763]: 2025-10-05 08:15:50.680319018 +0000 UTC m=+0.064378759 container died 032654cb1cb67aec2c0eee44c25bc314f4c6677771d656112b5d4e9d47fdf024 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_carver, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, build-date=2025-09-24T08:57:55, release=553, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 04:15:50 localhost podman[71763]: 2025-10-05 08:15:50.708565515 +0000 UTC m=+0.092625246 container remove 032654cb1cb67aec2c0eee44c25bc314f4c6677771d656112b5d4e9d47fdf024 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_carver, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, version=7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=) Oct 5 04:15:50 localhost systemd[1]: libpod-conmon-032654cb1cb67aec2c0eee44c25bc314f4c6677771d656112b5d4e9d47fdf024.scope: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: tmp-crun.hSSay2.mount: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay-cd9d6f308127fb583c768901b7dd809ae5c7be1b51750a1da49e523616d3d704-merged.mount: Deactivated successfully. 
Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ccd591dc14220d59606b19f9c34f7f8a6f8080fc7a10e4749a84163ad7a046c2-userdata-shm.mount: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay-cc8055c5bfd1a63def30b4e34956e2c689f74d30957aafd1ac1064a67119c975-merged.mount: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c0280cd0c81a91859261a571bfd98bf68df739ad086eb072d50933ebb99b32f-userdata-shm.mount: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay-8aef377180d4e2fea7dd53459f19bd22887b40eda0b62ceace5f4f361d1ea581-merged.mount: Deactivated successfully. Oct 5 04:15:50 localhost podman[71743]: 2025-10-05 08:15:50.968131944 +0000 UTC m=+0.393500786 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., container_name=nova_migration_target, distribution-scope=public) Oct 5 04:15:50 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:15:51 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Oct 5 04:15:53 localhost ovs-vsctl[71950]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Oct 5 04:15:53 localhost systemd[1]: libpod-90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd.scope: Deactivated successfully. Oct 5 04:15:53 localhost systemd[1]: libpod-90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd.scope: Consumed 2.942s CPU time. 
Oct 5 04:15:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:15:53 localhost podman[71951]: 2025-10-05 08:15:53.625259132 +0000 UTC m=+0.045109426 container died 90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=setup_ovs_manager, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 5 04:15:53 localhost systemd[1]: tmp-crun.RM3nmq.mount: Deactivated successfully. Oct 5 04:15:53 localhost podman[71952]: 2025-10-05 08:15:53.678734824 +0000 UTC m=+0.091854125 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, release=2, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:15:53 localhost podman[71952]: 2025-10-05 08:15:53.692746565 +0000 UTC m=+0.105865926 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd) Oct 5 04:15:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd-userdata-shm.mount: Deactivated successfully. 
Oct 5 04:15:53 localhost systemd[1]: var-lib-containers-storage-overlay-1afad8e4a4ed91ca3b45701cc45e6e550300e89c47d5543828eb9fbc3ddd630b-merged.mount: Deactivated successfully. Oct 5 04:15:53 localhost podman[71951]: 2025-10-05 08:15:53.722474311 +0000 UTC m=+0.142324625 container cleanup 90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, container_name=setup_ovs_manager, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, 
distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:15:53 localhost systemd[1]: libpod-conmon-90d7553397fecf30f4866426c400d5f19922cfd5ebe8836429dd12154c7ab0dd.scope: Deactivated successfully. Oct 5 04:15:53 localhost python3[69332]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Oct 5 04:15:53 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:15:54 localhost podman[72078]: 2025-10-05 08:15:54.209367952 +0000 UTC m=+0.087856666 container create 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.33.12) Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:15:54 localhost podman[72079]: 2025-10-05 08:15:54.238507093 +0000 UTC m=+0.111054776 container create cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Oct 5 04:15:54 localhost systemd[1]: Started libpod-conmon-14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.scope. Oct 5 04:15:54 localhost podman[72078]: 2025-10-05 08:15:54.165733737 +0000 UTC m=+0.044222541 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 04:15:54 localhost systemd[1]: Started libcrun container. Oct 5 04:15:54 localhost systemd[1]: Started libpod-conmon-cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.scope. 
Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eba47c07ad7ef3fb2d2a15bf005a78b3ed021af3b500a4d670828fd298cd628/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eba47c07ad7ef3fb2d2a15bf005a78b3ed021af3b500a4d670828fd298cd628/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eba47c07ad7ef3fb2d2a15bf005a78b3ed021af3b500a4d670828fd298cd628/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:54 localhost podman[72079]: 2025-10-05 08:15:54.176046487 +0000 UTC m=+0.048594180 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Oct 5 04:15:54 localhost systemd[1]: Started libcrun container.
Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bdddd8ad241b770732fe4a7a44ffc977364417983f57ad1279078cf4c130f4b/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bdddd8ad241b770732fe4a7a44ffc977364417983f57ad1279078cf4c130f4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bdddd8ad241b770732fe4a7a44ffc977364417983f57ad1279078cf4c130f4b/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.
Oct 5 04:15:54 localhost podman[72078]: 2025-10-05 08:15:54.32199713 +0000 UTC m=+0.200485884 container init 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible)
Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.
Oct 5 04:15:54 localhost podman[72079]: 2025-10-05 08:15:54.336083163 +0000 UTC m=+0.208630846 container init cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, release=1, container_name=ovn_metadata_agent, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.
Oct 5 04:15:54 localhost podman[72078]: 2025-10-05 08:15:54.353055753 +0000 UTC m=+0.231544497 container start 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, distribution-scope=public)
Oct 5 04:15:54 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring.
Oct 5 04:15:54 localhost python3[69332]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Oct 5 04:15:54 localhost systemd[1]: Created slice User Slice of UID 0.
Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.
Oct 5 04:15:54 localhost systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 5 04:15:54 localhost podman[72079]: 2025-10-05 08:15:54.397724586 +0000 UTC m=+0.270272259 container start cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, vcs-type=git, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 5 04:15:54 localhost python3[69332]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=9e8d2afb999998c163aa5ea4d40dbbed --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Oct 5 04:15:54 localhost systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 5 04:15:54 localhost podman[72105]: 2025-10-05 08:15:54.40559242 +0000 UTC m=+0.165444123 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15)
Oct 5 04:15:54 localhost podman[72105]: 2025-10-05 08:15:54.456014849 +0000 UTC m=+0.215866522 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=iscsid, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 04:15:54 localhost systemd[1]: Starting User Manager for UID 0...
Oct 5 04:15:54 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully.
Oct 5 04:15:54 localhost podman[72133]: 2025-10-05 08:15:54.544387489 +0000 UTC m=+0.181436367 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, container_name=ovn_controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git)
Oct 5 04:15:54 localhost podman[72133]: 2025-10-05 08:15:54.56064076 +0000 UTC m=+0.197689638 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, io.openshift.expose-services=, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, container_name=ovn_controller, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 5 04:15:54 localhost podman[72133]: unhealthy
Oct 5 04:15:54 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 04:15:54 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'.
Oct 5 04:15:54 localhost podman[72155]: 2025-10-05 08:15:54.521000024 +0000 UTC m=+0.109295299 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 5 04:15:54 localhost podman[72155]: 2025-10-05 08:15:54.602787954 +0000 UTC m=+0.191083239 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 5 04:15:54 localhost systemd[72165]: Queued start job for default target Main User Target.
Oct 5 04:15:54 localhost systemd[72165]: Created slice User Application Slice.
Oct 5 04:15:54 localhost systemd[72165]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 5 04:15:54 localhost systemd[72165]: Started Daily Cleanup of User's Temporary Directories.
Oct 5 04:15:54 localhost systemd[72165]: Reached target Paths.
Oct 5 04:15:54 localhost systemd[72165]: Reached target Timers.
Oct 5 04:15:54 localhost podman[72155]: unhealthy
Oct 5 04:15:54 localhost systemd[72165]: Starting D-Bus User Message Bus Socket...
Oct 5 04:15:54 localhost systemd[72165]: Starting Create User's Volatile Files and Directories...
Oct 5 04:15:54 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 04:15:54 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'.
Oct 5 04:15:54 localhost systemd[72165]: Finished Create User's Volatile Files and Directories.
Oct 5 04:15:54 localhost systemd[72165]: Listening on D-Bus User Message Bus Socket.
Oct 5 04:15:54 localhost systemd[72165]: Reached target Sockets.
Oct 5 04:15:54 localhost systemd[72165]: Reached target Basic System.
Oct 5 04:15:54 localhost systemd[72165]: Reached target Main User Target.
Oct 5 04:15:54 localhost systemd[72165]: Startup finished in 133ms.
Oct 5 04:15:54 localhost systemd[1]: Started User Manager for UID 0.
Oct 5 04:15:54 localhost systemd[1]: Started Session c9 of User root.
Oct 5 04:15:54 localhost systemd[1]: session-c9.scope: Deactivated successfully.
Oct 5 04:15:54 localhost kernel: device br-int entered promiscuous mode
Oct 5 04:15:54 localhost NetworkManager[5981]: [1759652154.7122] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11)
Oct 5 04:15:54 localhost systemd-udevd[72248]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 04:15:55 localhost python3[72268]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:55 localhost python3[72284]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:55 localhost python3[72300]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:55 localhost kernel: device genev_sys_6081 entered promiscuous mode Oct 5 04:15:55 localhost NetworkManager[5981]: [1759652155.7493] device (genev_sys_6081): carrier: link connected Oct 5 04:15:55 localhost NetworkManager[5981]: [1759652155.7506] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12) Oct 5 04:15:55 localhost python3[72319]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:56 localhost python3[72335]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:56 localhost python3[72355]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:56 localhost python3[72371]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:56 localhost python3[72388]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:57 localhost python3[72405]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:57 localhost python3[72423]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False 
get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:57 localhost python3[72439]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:57 localhost python3[72455]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:58 localhost python3[72516]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652158.03197-110474-47813965653718/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:59 localhost python3[72545]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652158.03197-110474-47813965653718/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:59 localhost python3[72574]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652158.03197-110474-47813965653718/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:00 localhost python3[72603]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652158.03197-110474-47813965653718/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:00 localhost python3[72632]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652158.03197-110474-47813965653718/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:01 localhost python3[72661]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652158.03197-110474-47813965653718/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:01 localhost python3[72677]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 04:16:01 localhost systemd[1]: Reloading. 
Oct 5 04:16:01 localhost systemd-rc-local-generator[72698]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:01 localhost systemd-sysv-generator[72702]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:02 localhost python3[72728]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:03 localhost systemd[1]: Reloading. Oct 5 04:16:03 localhost systemd-rc-local-generator[72754]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:03 localhost systemd-sysv-generator[72759]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:03 localhost systemd[1]: Starting ceilometer_agent_compute container... Oct 5 04:16:04 localhost tripleo-start-podman-container[72769]: Creating additional drop-in dependency for "ceilometer_agent_compute" (712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e) Oct 5 04:16:04 localhost systemd[1]: Reloading. Oct 5 04:16:04 localhost systemd-rc-local-generator[72826]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 04:16:04 localhost systemd-sysv-generator[72831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:04 localhost systemd[1]: Started ceilometer_agent_compute container. Oct 5 04:16:04 localhost systemd[1]: Stopping User Manager for UID 0... Oct 5 04:16:04 localhost systemd[72165]: Activating special unit Exit the Session... Oct 5 04:16:04 localhost systemd[72165]: Stopped target Main User Target. Oct 5 04:16:04 localhost systemd[72165]: Stopped target Basic System. Oct 5 04:16:04 localhost systemd[72165]: Stopped target Paths. Oct 5 04:16:04 localhost systemd[72165]: Stopped target Sockets. Oct 5 04:16:04 localhost systemd[72165]: Stopped target Timers. Oct 5 04:16:04 localhost systemd[72165]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 04:16:04 localhost systemd[72165]: Closed D-Bus User Message Bus Socket. Oct 5 04:16:04 localhost systemd[72165]: Stopped Create User's Volatile Files and Directories. Oct 5 04:16:04 localhost systemd[72165]: Removed slice User Application Slice. Oct 5 04:16:04 localhost systemd[72165]: Reached target Shutdown. Oct 5 04:16:04 localhost systemd[72165]: Finished Exit the Session. Oct 5 04:16:04 localhost systemd[72165]: Reached target Exit the Session. Oct 5 04:16:04 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 04:16:04 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 04:16:04 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 04:16:04 localhost systemd[1]: run-user-0.mount: Deactivated successfully. 
Oct 5 04:16:04 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 04:16:04 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 04:16:04 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 04:16:05 localhost python3[72855]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:05 localhost systemd[1]: Reloading. Oct 5 04:16:05 localhost systemd-rc-local-generator[72879]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:05 localhost systemd-sysv-generator[72885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:05 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Oct 5 04:16:05 localhost systemd[1]: Started ceilometer_agent_ipmi container. Oct 5 04:16:06 localhost python3[72922]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:06 localhost systemd[1]: Reloading. Oct 5 04:16:06 localhost systemd-sysv-generator[72954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:06 localhost systemd-rc-local-generator[72948]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 04:16:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:06 localhost systemd[1]: Starting logrotate_crond container... Oct 5 04:16:06 localhost systemd[1]: Started logrotate_crond container. Oct 5 04:16:07 localhost python3[72991]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:07 localhost systemd[1]: Reloading. Oct 5 04:16:07 localhost systemd-rc-local-generator[73018]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:07 localhost systemd-sysv-generator[73021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:08 localhost systemd[1]: Starting nova_migration_target container... Oct 5 04:16:08 localhost systemd[1]: Started nova_migration_target container. Oct 5 04:16:08 localhost python3[73057]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:08 localhost systemd[1]: Reloading. Oct 5 04:16:08 localhost systemd-sysv-generator[73082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 04:16:08 localhost systemd-rc-local-generator[73076]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:16:09 localhost systemd[1]: Starting ovn_controller container... Oct 5 04:16:09 localhost podman[73097]: 2025-10-05 08:16:09.229585104 +0000 UTC m=+0.096243085 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-type=git) Oct 5 04:16:09 localhost tripleo-start-podman-container[73098]: Creating additional drop-in dependency for "ovn_controller" (14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc) Oct 5 04:16:09 localhost systemd[1]: Reloading. 
Oct 5 04:16:09 localhost podman[73097]: 2025-10-05 08:16:09.445329172 +0000 UTC m=+0.311987183 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, 
io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9) Oct 5 04:16:09 localhost systemd-sysv-generator[73182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:09 localhost systemd-rc-local-generator[73179]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:09 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:16:09 localhost systemd[1]: Started ovn_controller container. Oct 5 04:16:10 localhost python3[73208]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:10 localhost systemd[1]: Reloading. Oct 5 04:16:10 localhost systemd-rc-local-generator[73234]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:10 localhost systemd-sysv-generator[73241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 5 04:16:11 localhost systemd[1]: Starting ovn_metadata_agent container... Oct 5 04:16:11 localhost systemd[1]: Started ovn_metadata_agent container. Oct 5 04:16:11 localhost python3[73289]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:13 localhost python3[73410]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005471150 step=4 update_config_hash_only=False Oct 5 04:16:13 localhost python3[73426]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:13 localhost python3[73443]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 5 04:16:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:16:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:16:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:16:20 localhost systemd[1]: tmp-crun.b0ym4K.mount: Deactivated successfully. Oct 5 04:16:20 localhost podman[73445]: 2025-10-05 08:16:20.730219149 +0000 UTC m=+0.136281621 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4) Oct 5 04:16:20 localhost podman[73447]: 2025-10-05 08:16:20.702742663 +0000 UTC m=+0.102140624 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=logrotate_crond, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:16:20 localhost podman[73445]: 2025-10-05 08:16:20.760701527 +0000 UTC m=+0.166763979 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute) Oct 5 04:16:20 localhost podman[73447]: 2025-10-05 08:16:20.787262358 +0000 UTC m=+0.186660359 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1) Oct 5 04:16:20 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:16:20 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:16:20 localhost podman[73446]: 2025-10-05 08:16:20.838274584 +0000 UTC m=+0.238472236 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Oct 5 04:16:20 localhost podman[73446]: 2025-10-05 08:16:20.895930549 +0000 UTC m=+0.296128211 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vcs-type=git, release=1, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1) Oct 5 04:16:20 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:16:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:16:21 localhost podman[73515]: 2025-10-05 08:16:21.676990587 +0000 UTC m=+0.079190001 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true, release=1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:16:22 localhost podman[73515]: 2025-10-05 08:16:22.053815639 +0000 UTC m=+0.456015063 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:16:22 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:16:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:16:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:16:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:16:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:16:24 localhost podman[73546]: 2025-10-05 08:16:24.713679862 +0000 UTC m=+0.079143200 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:16:24 localhost systemd[1]: tmp-crun.ZjGZ63.mount: Deactivated successfully. 
Oct 5 04:16:24 localhost podman[73538]: 2025-10-05 08:16:24.771575024 +0000 UTC m=+0.148992076 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc.) Oct 5 04:16:24 localhost podman[73540]: 2025-10-05 08:16:24.78283395 +0000 UTC m=+0.151932786 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, build-date=2025-07-21T13:27:15, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:16:24 localhost podman[73540]: 2025-10-05 08:16:24.793903951 +0000 UTC m=+0.163002757 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, version=17.1.9, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1) Oct 5 04:16:24 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:16:24 localhost podman[73538]: 2025-10-05 08:16:24.812969598 +0000 UTC m=+0.190386610 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, tcib_managed=true, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:16:24 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:16:24 localhost podman[73539]: 2025-10-05 08:16:24.880147862 +0000 UTC m=+0.253044072 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, container_name=ovn_controller, io.buildah.version=1.33.12, release=1, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=) Oct 5 04:16:24 localhost podman[73546]: 2025-10-05 08:16:24.895217121 +0000 UTC m=+0.260680389 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T16:28:53, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Oct 5 04:16:24 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:16:24 localhost podman[73539]: 2025-10-05 08:16:24.940897661 +0000 UTC m=+0.313793881 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ovn_controller, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1) Oct 5 04:16:24 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:16:32 localhost snmpd[68045]: empty variable list in _query Oct 5 04:16:32 localhost snmpd[68045]: empty variable list in _query Oct 5 04:16:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:16:40 localhost podman[73622]: 2025-10-05 08:16:40.687962358 +0000 UTC m=+0.091298940 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, 
com.redhat.component=openstack-qdrouterd-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1) Oct 5 04:16:40 localhost podman[73622]: 2025-10-05 08:16:40.913301947 +0000 UTC m=+0.316638499 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, vcs-type=git, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:16:40 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:16:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:16:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:16:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:16:51 localhost podman[73667]: 2025-10-05 08:16:51.499612214 +0000 UTC m=+0.090108337 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:16:51 localhost systemd[1]: tmp-crun.kkSpOq.mount: Deactivated successfully. Oct 5 04:16:51 localhost podman[73666]: 2025-10-05 08:16:51.562905172 +0000 UTC m=+0.156654954 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:16:51 localhost podman[73668]: 2025-10-05 08:16:51.526455703 +0000 UTC m=+0.112396023 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, tcib_managed=true, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1) Oct 5 04:16:51 localhost podman[73666]: 2025-10-05 08:16:51.594794819 +0000 UTC m=+0.188544631 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:16:51 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:16:51 localhost podman[73668]: 2025-10-05 08:16:51.610884795 +0000 UTC m=+0.196825095 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, vcs-type=git, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:16:51 localhost podman[73667]: 2025-10-05 08:16:51.628911495 +0000 UTC m=+0.219407628 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, release=1) Oct 5 04:16:51 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:16:51 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:16:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:16:52 localhost podman[73789]: 2025-10-05 08:16:52.435182108 +0000 UTC m=+0.089848741 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, release=1, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Oct 5 04:16:52 localhost podman[73789]: 2025-10-05 08:16:52.889081792 +0000 UTC m=+0.543748455 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, release=1, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible) Oct 5 04:16:52 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:16:55 localhost podman[73827]: 2025-10-05 08:16:55.692483532 +0000 UTC m=+0.091895106 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 
ovn-controller, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, io.openshift.expose-services=, build-date=2025-07-21T13:28:44) Oct 5 04:16:55 localhost podman[73826]: 2025-10-05 08:16:55.750426415 +0000 UTC m=+0.151504734 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, release=2, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:16:55 localhost podman[73826]: 2025-10-05 08:16:55.761682851 +0000 UTC m=+0.162761190 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-type=git, build-date=2025-07-21T13:04:03, distribution-scope=public, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, batch=17.1_20250721.1, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 5 04:16:55 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:16:55 localhost systemd[1]: tmp-crun.OvpBoQ.mount: Deactivated successfully. 
Oct 5 04:16:55 localhost podman[73827]: 2025-10-05 08:16:55.808711188 +0000 UTC m=+0.208122762 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.33.12, release=1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:16:55 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:16:55 localhost podman[73829]: 2025-10-05 08:16:55.860809603 +0000 UTC m=+0.254953323 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:16:55 localhost podman[73828]: 2025-10-05 08:16:55.813654493 +0000 UTC m=+0.208725659 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, build-date=2025-07-21T13:27:15, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=iscsid) Oct 5 04:16:55 localhost podman[73828]: 2025-10-05 08:16:55.894165348 +0000 UTC m=+0.289236484 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, release=1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64) Oct 5 04:16:55 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:16:55 localhost podman[73829]: 2025-10-05 08:16:55.937204367 +0000 UTC m=+0.331348037 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:16:55 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:17:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:17:11 localhost podman[73912]: 2025-10-05 08:17:11.686641478 +0000 UTC m=+0.090295142 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:17:11 localhost podman[73912]: 2025-10-05 08:17:11.883930865 +0000 UTC m=+0.287584479 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.expose-services=) Oct 5 04:17:11 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:17:18 localhost sshd[73941]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:17:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:17:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:17:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:17:22 localhost systemd[1]: tmp-crun.ImWrXo.mount: Deactivated successfully. 
Oct 5 04:17:22 localhost podman[73943]: 2025-10-05 08:17:22.714444884 +0000 UTC m=+0.097658382 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, release=1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:17:22 localhost systemd[1]: tmp-crun.wu9pBM.mount: Deactivated successfully. Oct 5 04:17:22 localhost podman[73944]: 2025-10-05 08:17:22.771206496 +0000 UTC m=+0.149946313 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:17:22 localhost podman[73943]: 2025-10-05 08:17:22.781162666 +0000 UTC m=+0.164376124 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true) Oct 5 04:17:22 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:17:22 localhost podman[73944]: 2025-10-05 08:17:22.810938555 +0000 UTC m=+0.189678412 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-ceilometer-ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi) Oct 5 04:17:22 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:17:22 localhost podman[73945]: 2025-10-05 08:17:22.866631257 +0000 UTC m=+0.245246701 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=logrotate_crond, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:17:22 localhost podman[73945]: 2025-10-05 08:17:22.875913329 +0000 UTC m=+0.254528813 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-07-21T13:07:52, distribution-scope=public) Oct 5 04:17:22 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:17:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:17:23 localhost podman[74017]: 2025-10-05 08:17:23.679083387 +0000 UTC m=+0.085746829 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-type=git, release=1, io.buildah.version=1.33.12, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:17:24 localhost podman[74017]: 2025-10-05 08:17:24.051848809 +0000 UTC m=+0.458512291 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, container_name=nova_migration_target, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:17:24 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:17:26 localhost podman[74043]: 2025-10-05 08:17:26.686435135 +0000 UTC m=+0.087205599 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:17:26 localhost podman[74041]: 2025-10-05 08:17:26.727147231 +0000 UTC m=+0.130345251 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, 
container_name=ovn_controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:17:26 localhost podman[74043]: 2025-10-05 08:17:26.732456015 +0000 UTC m=+0.133226449 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, container_name=ovn_metadata_agent, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:17:26 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:17:26 localhost podman[74041]: 2025-10-05 08:17:26.777648792 +0000 UTC m=+0.180846792 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ovn_controller, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:17:26 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:17:26 localhost podman[74042]: 2025-10-05 08:17:26.795504947 +0000 UTC m=+0.194587464 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, 
distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git) Oct 5 04:17:26 localhost podman[74040]: 2025-10-05 08:17:26.838291968 +0000 UTC m=+0.244470678 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., container_name=collectd, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:04:03, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Oct 5 04:17:26 localhost podman[74040]: 2025-10-05 08:17:26.846372888 +0000 UTC m=+0.252551598 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc.) 
Oct 5 04:17:26 localhost podman[74042]: 2025-10-05 08:17:26.858334473 +0000 UTC m=+0.257417000 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:17:26 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:17:26 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:17:27 localhost systemd[1]: tmp-crun.3BT9Qm.mount: Deactivated successfully. Oct 5 04:17:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:17:42 localhost systemd[1]: tmp-crun.ZsLinr.mount: Deactivated successfully. Oct 5 04:17:42 localhost podman[74127]: 2025-10-05 08:17:42.682637489 +0000 UTC m=+0.090398215 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9) Oct 5 04:17:42 localhost podman[74127]: 2025-10-05 08:17:42.909027926 +0000 UTC m=+0.316788642 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, distribution-scope=public, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:17:42 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:17:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. 
Oct 5 04:17:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:17:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:17:53 localhost podman[74171]: 2025-10-05 08:17:53.064943907 +0000 UTC m=+0.087627460 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team) Oct 5 04:17:53 localhost podman[74173]: 2025-10-05 08:17:53.120196648 +0000 UTC m=+0.136199209 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, release=1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:17:53 localhost podman[74173]: 2025-10-05 08:17:53.128033811 +0000 UTC m=+0.144036352 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-07-21T13:07:52, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:17:53 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:17:53 localhost podman[74172]: 2025-10-05 08:17:53.173133085 +0000 UTC m=+0.195358265 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, 
io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi) Oct 5 04:17:53 localhost podman[74171]: 2025-10-05 08:17:53.193134528 +0000 UTC m=+0.215818071 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ceilometer_agent_compute) Oct 5 04:17:53 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:17:53 localhost podman[74172]: 2025-10-05 08:17:53.232146178 +0000 UTC m=+0.254371358 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4) Oct 5 04:17:53 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:17:53 localhost systemd[1]: tmp-crun.WcIiiK.mount: Deactivated successfully. 
Oct 5 04:17:53 localhost podman[74325]: 2025-10-05 08:17:53.841449063 +0000 UTC m=+0.095122754 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vendor=Red Hat, Inc., release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7) Oct 5 04:17:53 localhost podman[74325]: 2025-10-05 08:17:53.951805249 +0000 UTC m=+0.205478930 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a 
fully featured and supported base image., release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.33.12, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7) Oct 5 04:17:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:17:54 localhost podman[74376]: 2025-10-05 08:17:54.184456096 +0000 UTC m=+0.081681869 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 5 04:17:54 localhost podman[74376]: 2025-10-05 08:17:54.568775641 +0000 UTC m=+0.466001364 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:17:54 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:17:57 localhost systemd[1]: tmp-crun.Oyc0ux.mount: Deactivated successfully. Oct 5 04:17:57 localhost podman[74493]: 2025-10-05 08:17:57.69512846 +0000 UTC m=+0.100470489 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ovn_controller, release=1) Oct 5 04:17:57 localhost podman[74493]: 2025-10-05 08:17:57.722660908 +0000 UTC m=+0.128002957 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:17:57 localhost podman[74492]: 2025-10-05 08:17:57.747873572 +0000 UTC m=+0.153397746 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, batch=17.1_20250721.1, container_name=collectd, vendor=Red Hat, Inc., release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.expose-services=) Oct 5 04:17:57 localhost podman[74492]: 2025-10-05 08:17:57.789002629 +0000 UTC m=+0.194526773 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, config_id=tripleo_step3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, vcs-type=git, tcib_managed=true, build-date=2025-07-21T13:04:03, release=2, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:17:57 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:17:57 localhost podman[74495]: 2025-10-05 08:17:57.791487787 +0000 UTC m=+0.191574004 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53) Oct 5 04:17:57 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:17:57 localhost podman[74494]: 2025-10-05 08:17:57.879859336 +0000 UTC m=+0.281505994 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, architecture=x86_64, config_id=tripleo_step3) Oct 5 04:17:57 localhost podman[74495]: 2025-10-05 08:17:57.9319093 +0000 UTC m=+0.331995477 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:17:57 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:17:58 localhost podman[74494]: 2025-10-05 08:17:58.015990873 +0000 UTC m=+0.417637481 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:17:58 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:18:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:18:13 localhost systemd[1]: tmp-crun.mhOXeH.mount: Deactivated successfully. Oct 5 04:18:13 localhost podman[74576]: 2025-10-05 08:18:13.663163242 +0000 UTC m=+0.075539930 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12) Oct 5 04:18:13 localhost podman[74576]: 2025-10-05 08:18:13.820210512 +0000 UTC m=+0.232587190 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, container_name=metrics_qdr) Oct 5 04:18:13 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:18:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:18:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:18:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:18:23 localhost podman[74607]: 2025-10-05 08:18:23.68867819 +0000 UTC m=+0.081414948 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:18:23 localhost podman[74607]: 2025-10-05 08:18:23.697779206 +0000 UTC m=+0.090515964 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:18:23 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:18:23 localhost podman[74606]: 2025-10-05 08:18:23.78570331 +0000 UTC m=+0.179082876 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, architecture=x86_64) Oct 5 04:18:23 localhost systemd[1]: tmp-crun.ju1ZaZ.mount: Deactivated successfully. 
Oct 5 04:18:23 localhost podman[74605]: 2025-10-05 08:18:23.844469086 +0000 UTC m=+0.244950124 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, release=1, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Oct 5 04:18:23 localhost podman[74606]: 2025-10-05 08:18:23.86681006 +0000 UTC m=+0.260189626 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, 
com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, build-date=2025-07-21T15:29:47) Oct 5 04:18:23 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:18:23 localhost podman[74605]: 2025-10-05 08:18:23.923672014 +0000 UTC m=+0.324153102 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:18:23 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:18:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:18:25 localhost podman[74678]: 2025-10-05 08:18:25.642809374 +0000 UTC m=+0.055134380 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:18:26 localhost podman[74678]: 2025-10-05 08:18:26.021875297 +0000 UTC m=+0.434200303 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target, description=Red 
Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:18:26 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:18:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:18:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:18:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:18:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:18:28 localhost systemd[1]: tmp-crun.ypuxuo.mount: Deactivated successfully. 
Oct 5 04:18:28 localhost podman[74699]: 2025-10-05 08:18:28.681315951 +0000 UTC m=+0.088816788 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, com.redhat.component=openstack-collectd-container) Oct 5 04:18:28 localhost podman[74699]: 2025-10-05 08:18:28.694757264 +0000 UTC m=+0.102258121 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, distribution-scope=public, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:18:28 localhost systemd[1]: tmp-crun.G5XlOA.mount: Deactivated successfully. 
Oct 5 04:18:28 localhost podman[74707]: 2025-10-05 08:18:28.708259479 +0000 UTC m=+0.100804193 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, 
architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, release=1, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 5 04:18:28 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:18:28 localhost podman[74701]: 2025-10-05 08:18:28.75168107 +0000 UTC m=+0.148092568 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15) Oct 5 04:18:28 localhost podman[74707]: 2025-10-05 08:18:28.772706139 +0000 UTC m=+0.165250843 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack 
TripleO Team) Oct 5 04:18:28 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:18:28 localhost podman[74701]: 2025-10-05 08:18:28.789078851 +0000 UTC m=+0.185490379 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, architecture=x86_64, release=1, batch=17.1_20250721.1) Oct 5 04:18:28 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:18:28 localhost podman[74700]: 2025-10-05 08:18:28.840769686 +0000 UTC m=+0.240655498 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1, version=17.1.9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:18:28 localhost podman[74700]: 2025-10-05 08:18:28.894765354 +0000 UTC m=+0.294651176 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 04:18:28 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:18:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:18:44 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:18:44 localhost recover_tripleo_nova_virtqemud[74790]: 62622 Oct 5 04:18:44 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:18:44 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:18:44 localhost systemd[1]: tmp-crun.ffBma9.mount: Deactivated successfully. 
Oct 5 04:18:44 localhost podman[74783]: 2025-10-05 08:18:44.704503064 +0000 UTC m=+0.105114749 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:18:44 localhost podman[74783]: 2025-10-05 08:18:44.913066384 +0000 UTC m=+0.313678059 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, architecture=x86_64, container_name=metrics_qdr, tcib_managed=true) Oct 5 04:18:44 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:18:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:18:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:18:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:18:54 localhost podman[74812]: 2025-10-05 08:18:54.682757707 +0000 UTC m=+0.090700999 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, build-date=2025-07-21T14:45:33) Oct 5 04:18:54 localhost podman[74814]: 2025-10-05 08:18:54.741681268 +0000 UTC m=+0.146212368 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public) Oct 5 04:18:54 localhost podman[74812]: 2025-10-05 08:18:54.745421528 +0000 UTC m=+0.153364870 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc.) 
Oct 5 04:18:54 localhost podman[74814]: 2025-10-05 08:18:54.759060417 +0000 UTC m=+0.163591537 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-cron-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, batch=17.1_20250721.1, 
container_name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Oct 5 04:18:54 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:18:54 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:18:54 localhost podman[74813]: 2025-10-05 08:18:54.839275583 +0000 UTC m=+0.246045304 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., 
container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public) Oct 5 04:18:54 localhost podman[74813]: 2025-10-05 08:18:54.870878355 +0000 UTC m=+0.277648086 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:18:54 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:18:55 localhost systemd[1]: tmp-crun.FoaEf7.mount: Deactivated successfully. Oct 5 04:18:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:18:56 localhost podman[74944]: 2025-10-05 08:18:56.674050544 +0000 UTC m=+0.080577366 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Oct 5 04:18:57 localhost podman[74944]: 2025-10-05 08:18:57.051859334 +0000 UTC m=+0.458386206 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc.) Oct 5 04:18:57 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:18:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:18:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:18:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:18:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:18:59 localhost podman[74982]: 2025-10-05 08:18:59.684936346 +0000 UTC m=+0.091067450 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 5 04:18:59 localhost podman[74982]: 2025-10-05 08:18:59.701886004 +0000 UTC m=+0.108017068 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, name=rhosp17/openstack-collectd, container_name=collectd, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:04:03, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:18:59 localhost podman[74986]: 2025-10-05 08:18:59.739154189 +0000 UTC m=+0.134514362 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 5 04:18:59 localhost podman[74983]: 2025-10-05 08:18:59.780097675 +0000 UTC m=+0.183412363 container 
health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, build-date=2025-07-21T13:28:44, release=1, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.33.12) Oct 5 04:18:59 localhost podman[74983]: 2025-10-05 08:18:59.831830901 +0000 UTC m=+0.235145639 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 04:18:59 localhost podman[74984]: 2025-10-05 08:18:59.840047813 +0000 UTC m=+0.239834116 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, release=1, name=rhosp17/openstack-iscsid, architecture=x86_64, container_name=iscsid, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:18:59 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:18:59 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:18:59 localhost podman[74984]: 2025-10-05 08:18:59.878163173 +0000 UTC m=+0.277949486 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:18:59 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:18:59 localhost podman[74986]: 2025-10-05 08:18:59.908911253 +0000 UTC m=+0.304271366 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
container_name=ovn_metadata_agent, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4) Oct 5 04:18:59 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:19:05 localhost python3[75117]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:19:06 localhost python3[75162]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652345.6033108-114908-147924546595517/source _original_basename=tmpojo6w_9c follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:19:07 localhost python3[75192]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:19:09 localhost ansible-async_wrapper.py[75364]: Invoked with 615703418204 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652348.5696132-115100-234835064691448/AnsiballZ_command.py _
Oct 5 04:19:09 localhost ansible-async_wrapper.py[75367]: Starting module and watcher
Oct 5 04:19:09 localhost ansible-async_wrapper.py[75367]: Start watching 75368 (3600)
Oct 5 04:19:09 localhost ansible-async_wrapper.py[75368]: Start module (75368)
Oct 5 04:19:09 localhost ansible-async_wrapper.py[75364]: Return async_wrapper task started.
Oct 5 04:19:09 localhost python3[75388]: ansible-ansible.legacy.async_status Invoked with jid=615703418204.75364 mode=status _async_dir=/tmp/.ansible_async
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated.
It should be converted to version 5
Oct 5 04:19:13 localhost puppet-user[75383]: (file: /etc/puppet/hiera.yaml)
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: Undefined variable '::deploy_config_name';
Oct 5 04:19:13 localhost puppet-user[75383]: (file & line not available)
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 5 04:19:13 localhost puppet-user[75383]: (file & line not available)
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:19:13 localhost puppet-user[75383]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:19:13 localhost puppet-user[75383]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:19:13 localhost puppet-user[75383]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:19:13 localhost puppet-user[75383]: with Stdlib::Compat::Array.
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:19:13 localhost puppet-user[75383]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:19:13 localhost puppet-user[75383]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:19:13 localhost puppet-user[75383]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:19:13 localhost puppet-user[75383]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:19:13 localhost puppet-user[75383]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:19:13 localhost puppet-user[75383]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:19:13 localhost puppet-user[75383]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:19:13 localhost puppet-user[75383]: Warning: Unknown variable: '::deployment_type'.
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Oct 5 04:19:13 localhost puppet-user[75383]: Notice: Compiled catalog for np0005471150.localdomain in environment production in 0.22 seconds
Oct 5 04:19:13 localhost puppet-user[75383]: Notice: Applied catalog in 0.31 seconds
Oct 5 04:19:13 localhost puppet-user[75383]: Application:
Oct 5 04:19:13 localhost puppet-user[75383]: Initial environment: production
Oct 5 04:19:13 localhost puppet-user[75383]: Converged environment: production
Oct 5 04:19:13 localhost puppet-user[75383]: Run mode: user
Oct 5 04:19:13 localhost puppet-user[75383]: Changes:
Oct 5 04:19:13 localhost puppet-user[75383]: Events:
Oct 5 04:19:13 localhost puppet-user[75383]: Resources:
Oct 5 04:19:13 localhost puppet-user[75383]: Total: 19
Oct 5 04:19:13 localhost puppet-user[75383]: Time:
Oct 5 04:19:13 localhost puppet-user[75383]: Filebucket: 0.00
Oct 5 04:19:13 localhost puppet-user[75383]: Package: 0.00
Oct 5 04:19:13 localhost puppet-user[75383]: Schedule: 0.00
Oct 5 04:19:13 localhost puppet-user[75383]: Exec: 0.01
Oct 5 04:19:13 localhost puppet-user[75383]: Augeas: 0.01
Oct 5 04:19:13 localhost puppet-user[75383]: File: 0.02
Oct 5 04:19:13 localhost puppet-user[75383]: Service: 0.07
Oct 5 04:19:13 localhost puppet-user[75383]: Transaction evaluation: 0.30
Oct 5 04:19:13 localhost puppet-user[75383]: Config retrieval: 0.30
Oct 5 04:19:13 localhost puppet-user[75383]: Catalog application: 0.31
Oct 5 04:19:13 localhost puppet-user[75383]: Last run: 1759652353
Oct 5 04:19:13 localhost puppet-user[75383]: Total: 0.32
Oct 5 04:19:13 localhost puppet-user[75383]: Version:
Oct 5 04:19:13 localhost puppet-user[75383]: Config: 1759652353
Oct 5 04:19:13 localhost puppet-user[75383]: Puppet: 7.10.0
Oct 5 04:19:13 localhost ansible-async_wrapper.py[75368]: Module complete (75368)
Oct 5 04:19:14 localhost ansible-async_wrapper.py[75367]: Done in kid B.
Oct 5 04:19:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:19:15 localhost podman[75512]: 2025-10-05 08:19:15.677592353 +0000 UTC m=+0.085913620 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro',
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public)
Oct 5 04:19:15 localhost podman[75512]: 2025-10-05 08:19:15.868647202 +0000 UTC m=+0.276968489 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro',
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, release=1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container)
Oct 5 04:19:15 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:19:19 localhost python3[75556]: ansible-ansible.legacy.async_status Invoked with jid=615703418204.75364 mode=status _async_dir=/tmp/.ansible_async
Oct 5 04:19:20 localhost python3[75572]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 04:19:20 localhost python3[75588]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:19:21 localhost python3[75638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:19:21 localhost python3[75656]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpo90ig0u9 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 04:19:22 localhost python3[75686]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None
selevel=None setype=None attributes=None
Oct 5 04:19:23 localhost python3[75792]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 5 04:19:24 localhost python3[75811]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:19:24 localhost python3[75843]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:19:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.
Oct 5 04:19:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.
Oct 5 04:19:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.
Oct 5 04:19:25 localhost systemd[1]: tmp-crun.l23z6g.mount: Deactivated successfully.
Oct 5 04:19:25 localhost podman[75894]: 2025-10-05 08:19:25.375138058 +0000 UTC m=+0.084102981 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute)
Oct 5 04:19:25 localhost systemd[1]: tmp-crun.fDxAsU.mount: Deactivated successfully.
Oct 5 04:19:25 localhost podman[75895]: 2025-10-05 08:19:25.390993457 +0000 UTC m=+0.099954040 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible,
io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f)
Oct 5 04:19:25 localhost podman[75896]: 2025-10-05 08:19:25.430553405 +0000 UTC m=+0.135774807 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 5 04:19:25 localhost podman[75895]: 2025-10-05 08:19:25.44483548 +0000 UTC m=+0.153796033 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1,
version=17.1.9, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 5 04:19:25 localhost python3[75893]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:19:25 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully.
Oct 5 04:19:25 localhost podman[75896]: 2025-10-05 08:19:25.466053733 +0000 UTC m=+0.171275075 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, tcib_managed=true, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64)
Oct 5 04:19:25 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully.
Oct 5 04:19:25 localhost podman[75894]: 2025-10-05 08:19:25.481161871 +0000 UTC m=+0.190126814 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_id=tripleo_step4, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro',
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute)
Oct 5 04:19:25 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully.
Oct 5 04:19:25 localhost python3[75981]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:19:26 localhost python3[76043]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:19:26 localhost python3[76061]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file
path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:27 localhost python3[76123]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:19:27 localhost systemd[1]: tmp-crun.rvXbgi.mount: Deactivated successfully. Oct 5 04:19:27 localhost podman[76142]: 2025-10-05 08:19:27.304768041 +0000 UTC m=+0.087706879 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Oct 5 04:19:27 localhost python3[76141]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:27 localhost podman[76142]: 2025-10-05 08:19:27.656805125 +0000 UTC m=+0.439743933 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1) Oct 5 04:19:27 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. 
Oct 5 04:19:27 localhost python3[76226]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:28 localhost python3[76244]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:28 localhost python3[76274]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:19:28 localhost systemd[1]: Reloading. Oct 5 04:19:28 localhost systemd-rc-local-generator[76296]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:19:28 localhost systemd-sysv-generator[76300]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:19:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 04:19:29 localhost python3[76360]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:29 localhost python3[76378]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:19:30 localhost systemd[1]: tmp-crun.HfHBBJ.mount: Deactivated successfully. 
Oct 5 04:19:30 localhost podman[76441]: 2025-10-05 08:19:30.388280703 +0000 UTC m=+0.094278326 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-type=git, version=17.1.9, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 04:19:30 localhost podman[76441]: 2025-10-05 08:19:30.402776454 +0000 UTC m=+0.108774117 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, batch=17.1_20250721.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:19:30 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:19:30 localhost python3[76440]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:30 localhost podman[76443]: 2025-10-05 08:19:30.491569241 +0000 UTC m=+0.192120147 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, container_name=iscsid, release=1, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, vcs-type=git, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 5 04:19:30 localhost podman[76443]: 2025-10-05 08:19:30.524271824 +0000 UTC m=+0.224822750 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 5 04:19:30 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:19:30 localhost podman[76442]: 2025-10-05 08:19:30.549222277 +0000 UTC m=+0.253709319 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:28:44, distribution-scope=public, name=rhosp17/openstack-ovn-controller, release=1, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, tcib_managed=true) Oct 5 04:19:30 localhost podman[76442]: 2025-10-05 08:19:30.581799487 +0000 UTC 
m=+0.286286619 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public) Oct 5 04:19:30 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:19:30 localhost podman[76444]: 2025-10-05 08:19:30.596519114 +0000 UTC m=+0.291564172 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=ovn_metadata_agent, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:19:30 localhost podman[76444]: 2025-10-05 08:19:30.672856245 +0000 UTC m=+0.367901353 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:19:30 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:19:30 localhost python3[76537]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:31 localhost python3[76575]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:19:31 localhost systemd[1]: Reloading. Oct 5 04:19:31 localhost systemd-rc-local-generator[76600]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:19:31 localhost systemd-sysv-generator[76605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:19:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:19:31 localhost systemd[1]: tmp-crun.dFdsfy.mount: Deactivated successfully. Oct 5 04:19:31 localhost systemd[1]: Starting Create netns directory... Oct 5 04:19:31 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 04:19:31 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 04:19:31 localhost systemd[1]: Finished Create netns directory. 
Oct 5 04:19:32 localhost python3[76632]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 5 04:19:33 localhost python3[76690]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Oct 5 04:19:34 localhost podman[76728]: 2025-10-05 08:19:34.246821847 +0000 UTC m=+0.084464161 container create 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, container_name=nova_compute, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1) Oct 5 04:19:34 localhost systemd[1]: Started libpod-conmon-5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.scope. Oct 5 04:19:34 localhost podman[76728]: 2025-10-05 08:19:34.202987704 +0000 UTC m=+0.040630048 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:19:34 localhost systemd[1]: Started libcrun container. 
Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796699655caeb5f0994b29dba9c776a53d79d4d4bd0c45c0951fd1d3486e626e/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796699655caeb5f0994b29dba9c776a53d79d4d4bd0c45c0951fd1d3486e626e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796699655caeb5f0994b29dba9c776a53d79d4d4bd0c45c0951fd1d3486e626e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796699655caeb5f0994b29dba9c776a53d79d4d4bd0c45c0951fd1d3486e626e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796699655caeb5f0994b29dba9c776a53d79d4d4bd0c45c0951fd1d3486e626e/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:19:34 localhost podman[76728]: 2025-10-05 08:19:34.353833866 +0000 UTC m=+0.191476230 container init 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, container_name=nova_compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:19:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:19:34 localhost podman[76728]: 2025-10-05 08:19:34.392146731 +0000 UTC m=+0.229789005 container start 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, release=1, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:19:34 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring. 
Oct 5 04:19:34 localhost python3[76690]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:19:34 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 04:19:34 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Oct 5 04:19:34 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 04:19:34 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 04:19:34 localhost podman[76749]: 2025-10-05 08:19:34.473502707 +0000 UTC m=+0.075854379 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, container_name=nova_compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., release=1, config_id=tripleo_step5, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:19:34 localhost podman[76749]: 2025-10-05 08:19:34.527107604 +0000 UTC m=+0.129459246 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, version=17.1.9, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:48:37, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:19:34 localhost podman[76749]: unhealthy Oct 5 04:19:34 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:19:34 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 04:19:34 localhost systemd[76768]: Queued start job for default target Main User Target. Oct 5 04:19:34 localhost systemd[76768]: Created slice User Application Slice. Oct 5 04:19:34 localhost systemd[76768]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 5 04:19:34 localhost systemd[76768]: Started Daily Cleanup of User's Temporary Directories. Oct 5 04:19:34 localhost systemd[76768]: Reached target Paths. Oct 5 04:19:34 localhost systemd[76768]: Reached target Timers. Oct 5 04:19:34 localhost systemd[76768]: Starting D-Bus User Message Bus Socket... Oct 5 04:19:34 localhost systemd[76768]: Starting Create User's Volatile Files and Directories... Oct 5 04:19:34 localhost systemd[76768]: Finished Create User's Volatile Files and Directories. Oct 5 04:19:34 localhost systemd[76768]: Listening on D-Bus User Message Bus Socket. Oct 5 04:19:34 localhost systemd[76768]: Reached target Sockets. Oct 5 04:19:34 localhost systemd[76768]: Reached target Basic System. Oct 5 04:19:34 localhost systemd[76768]: Reached target Main User Target. Oct 5 04:19:34 localhost systemd[76768]: Startup finished in 150ms. Oct 5 04:19:34 localhost systemd[1]: Started User Manager for UID 0. Oct 5 04:19:34 localhost systemd[1]: Started Session c10 of User root. Oct 5 04:19:34 localhost systemd[1]: session-c10.scope: Deactivated successfully. 
Oct 5 04:19:34 localhost podman[76849]: 2025-10-05 08:19:34.932759495 +0000 UTC m=+0.102782546 container create 92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git) Oct 5 04:19:34 localhost systemd[1]: Started libpod-conmon-92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf.scope. Oct 5 04:19:34 localhost podman[76849]: 2025-10-05 08:19:34.880312139 +0000 UTC m=+0.050335210 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:19:34 localhost systemd[1]: Started libcrun container. Oct 5 04:19:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd5a56456730e280de0c1ee3493eb3912f8a4b0c96361bf142b5a609935db68/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd5a56456730e280de0c1ee3493eb3912f8a4b0c96361bf142b5a609935db68/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:35 localhost podman[76849]: 2025-10-05 08:19:35.011347087 +0000 UTC m=+0.181370138 container init 92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, release=1, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, name=rhosp17/openstack-nova-compute) Oct 5 04:19:35 localhost podman[76849]: 2025-10-05 08:19:35.02146717 +0000 UTC m=+0.191490191 container start 92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, version=17.1.9, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., container_name=nova_wait_for_compute_service, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:19:35 localhost podman[76849]: 2025-10-05 08:19:35.021761458 +0000 UTC m=+0.191784549 container 
attach 92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.9, container_name=nova_wait_for_compute_service, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 04:19:44 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:19:44 localhost systemd[1]: Stopping User Manager for UID 0... Oct 5 04:19:44 localhost systemd[76768]: Activating special unit Exit the Session... Oct 5 04:19:44 localhost systemd[76768]: Stopped target Main User Target. Oct 5 04:19:44 localhost systemd[76768]: Stopped target Basic System. Oct 5 04:19:44 localhost systemd[76768]: Stopped target Paths. Oct 5 04:19:44 localhost systemd[76768]: Stopped target Sockets. Oct 5 04:19:44 localhost systemd[76768]: Stopped target Timers. Oct 5 04:19:44 localhost systemd[76768]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 04:19:44 localhost systemd[76768]: Closed D-Bus User Message Bus Socket. Oct 5 04:19:44 localhost systemd[76768]: Stopped Create User's Volatile Files and Directories. Oct 5 04:19:44 localhost systemd[76768]: Removed slice User Application Slice. Oct 5 04:19:44 localhost systemd[76768]: Reached target Shutdown. Oct 5 04:19:44 localhost systemd[76768]: Finished Exit the Session. Oct 5 04:19:44 localhost systemd[76768]: Reached target Exit the Session. Oct 5 04:19:44 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 04:19:44 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 04:19:44 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 04:19:44 localhost recover_tripleo_nova_virtqemud[76873]: 62622 Oct 5 04:19:44 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:19:44 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:19:44 localhost systemd[1]: run-user-0.mount: Deactivated successfully. 
Oct 5 04:19:44 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 04:19:44 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 04:19:44 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 04:19:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:19:46 localhost podman[76875]: 2025-10-05 08:19:46.696724764 +0000 UTC m=+0.100950766 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, architecture=x86_64, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Oct 5 04:19:46 localhost podman[76875]: 2025-10-05 08:19:46.897895365 +0000 UTC m=+0.302121397 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd) Oct 5 04:19:46 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:19:52 localhost systemd[1]: session-28.scope: Deactivated successfully. Oct 5 04:19:52 localhost systemd[1]: session-28.scope: Consumed 2.982s CPU time. Oct 5 04:19:52 localhost systemd-logind[760]: Session 28 logged out. Waiting for processes to exit. Oct 5 04:19:52 localhost systemd-logind[760]: Removed session 28. Oct 5 04:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:19:55 localhost podman[76905]: 2025-10-05 08:19:55.739127932 +0000 UTC m=+0.137533374 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:19:55 localhost podman[76909]: 2025-10-05 08:19:55.786707147 +0000 UTC m=+0.180775662 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:19:55 localhost podman[76905]: 2025-10-05 08:19:55.799912993 +0000 UTC m=+0.198318425 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:19:55 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:19:55 localhost podman[76909]: 2025-10-05 08:19:55.851251439 +0000 UTC m=+0.245319874 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, config_id=tripleo_step4, build-date=2025-07-21T13:07:52) Oct 5 04:19:55 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:19:55 localhost podman[76904]: 2025-10-05 08:19:55.700607682 +0000 UTC m=+0.107271917 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, release=1, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64) Oct 5 04:19:55 localhost podman[76904]: 2025-10-05 08:19:55.931520106 +0000 UTC m=+0.338184341 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, architecture=x86_64, release=1) Oct 5 04:19:55 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:19:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:19:57 localhost podman[77004]: 2025-10-05 08:19:57.781531669 +0000 UTC m=+0.070463233 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:19:58 localhost podman[77004]: 2025-10-05 08:19:58.138846116 +0000 UTC m=+0.427777630 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:19:58 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:20:00 localhost systemd[1]: tmp-crun.OVlv4N.mount: Deactivated successfully. Oct 5 04:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:20:00 localhost podman[77076]: 2025-10-05 08:20:00.750337455 +0000 UTC m=+0.151067179 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, container_name=iscsid) Oct 5 04:20:00 localhost podman[77075]: 2025-10-05 08:20:00.70719187 +0000 UTC m=+0.110509884 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=2, vendor=Red Hat, Inc., container_name=collectd, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-collectd-container, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:20:00 localhost podman[77102]: 2025-10-05 08:20:00.815824903 +0000 UTC m=+0.107367119 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_controller, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:20:00 localhost podman[77076]: 2025-10-05 08:20:00.888510265 +0000 UTC m=+0.289239959 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15) Oct 5 04:20:00 localhost podman[77102]: 2025-10-05 08:20:00.897720633 +0000 UTC m=+0.189262809 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 5 04:20:00 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:20:00 localhost podman[77104]: 2025-10-05 08:20:00.867740035 +0000 UTC m=+0.147370860 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, distribution-scope=public, release=1) Oct 5 04:20:00 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:20:00 localhost podman[77075]: 2025-10-05 08:20:00.949953624 +0000 UTC m=+0.353271628 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, container_name=collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:04:03) Oct 5 04:20:00 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:20:01 localhost podman[77104]: 2025-10-05 08:20:01.001703511 +0000 UTC m=+0.281334306 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, config_id=tripleo_step4, io.openshift.expose-services=) Oct 5 04:20:01 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:20:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:20:04 localhost systemd[1]: tmp-crun.eBpScL.mount: Deactivated successfully. 
Oct 5 04:20:04 localhost podman[77159]: 2025-10-05 08:20:04.680464433 +0000 UTC m=+0.093228038 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, tcib_managed=true, version=17.1.9, config_id=tripleo_step5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, name=rhosp17/openstack-nova-compute) Oct 5 04:20:04 localhost podman[77159]: 2025-10-05 08:20:04.738133969 +0000 UTC m=+0.150897584 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step5, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:20:04 
localhost podman[77159]: unhealthy Oct 5 04:20:04 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:20:04 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 04:20:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:20:17 localhost podman[77181]: 2025-10-05 08:20:17.693601405 +0000 UTC m=+0.099361463 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., config_id=tripleo_step1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:20:17 localhost podman[77181]: 2025-10-05 08:20:17.872941447 +0000 UTC m=+0.278701525 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:20:17 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:20:26 localhost podman[77210]: 2025-10-05 08:20:26.68177967 +0000 UTC m=+0.090012622 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, tcib_managed=true, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:20:26 localhost podman[77210]: 2025-10-05 08:20:26.737903405 +0000 UTC m=+0.146136327 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, version=17.1.9, architecture=x86_64) Oct 5 04:20:26 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:20:26 localhost systemd[1]: tmp-crun.bS9fRn.mount: Deactivated successfully. 
Oct 5 04:20:26 localhost podman[77211]: 2025-10-05 08:20:26.73960812 +0000 UTC m=+0.142179379 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, release=1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, 
architecture=x86_64, vcs-type=git, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:20:26 localhost podman[77212]: 2025-10-05 08:20:26.791605685 +0000 UTC m=+0.191848371 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:07:52, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, io.buildah.version=1.33.12, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, architecture=x86_64, vcs-type=git, container_name=logrotate_crond, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:20:26 localhost podman[77212]: 2025-10-05 08:20:26.800913505 +0000 UTC m=+0.201156181 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:20:26 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:20:26 localhost podman[77211]: 2025-10-05 08:20:26.874871652 +0000 UTC m=+0.277442901 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, tcib_managed=true, build-date=2025-07-21T15:29:47, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:20:26 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:20:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:20:28 localhost podman[77282]: 2025-10-05 08:20:28.666282913 +0000 UTC m=+0.077686909 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, release=1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:20:29 localhost podman[77282]: 2025-10-05 08:20:29.078786408 +0000 UTC m=+0.490190334 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, io.buildah.version=1.33.12) Oct 5 04:20:29 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:20:31 localhost podman[77305]: 2025-10-05 08:20:31.690074112 +0000 UTC m=+0.087319938 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, 
version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:20:31 localhost podman[77305]: 2025-10-05 08:20:31.727942545 +0000 UTC m=+0.125188421 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:27:15, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-iscsid-container, container_name=iscsid, tcib_managed=true, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:20:31 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:20:31 localhost podman[77303]: 2025-10-05 08:20:31.744460351 +0000 UTC m=+0.148495050 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, architecture=x86_64, config_id=tripleo_step3, batch=17.1_20250721.1, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, container_name=collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, tcib_managed=true) Oct 5 04:20:31 localhost podman[77303]: 2025-10-05 08:20:31.778014146 +0000 UTC m=+0.182048825 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, container_name=collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=2, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd) 
Oct 5 04:20:31 localhost systemd[1]: tmp-crun.inS79O.mount: Deactivated successfully. Oct 5 04:20:31 localhost podman[77306]: 2025-10-05 08:20:31.796870605 +0000 UTC m=+0.192678832 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, release=1, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9) Oct 5 04:20:31 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:20:31 localhost podman[77306]: 2025-10-05 08:20:31.844757558 +0000 UTC m=+0.240565865 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9) Oct 5 04:20:31 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:20:31 localhost podman[77304]: 2025-10-05 08:20:31.849520556 +0000 UTC m=+0.247346498 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, tcib_managed=true, release=1, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container) Oct 5 04:20:31 localhost podman[77304]: 2025-10-05 08:20:31.930720648 +0000 UTC 
m=+0.328546600 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:20:31 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:20:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:20:35 localhost podman[77388]: 2025-10-05 08:20:35.662564383 +0000 UTC m=+0.071558742 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, io.buildah.version=1.33.12, container_name=nova_compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container) Oct 5 04:20:35 localhost podman[77388]: 2025-10-05 08:20:35.71391355 +0000 UTC m=+0.122907969 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, distribution-scope=public, name=rhosp17/openstack-nova-compute) Oct 5 04:20:35 localhost podman[77388]: unhealthy Oct 5 04:20:35 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:20:35 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 04:20:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:20:48 localhost podman[77410]: 2025-10-05 08:20:48.662973432 +0000 UTC m=+0.076704721 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, release=1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1) Oct 5 04:20:48 localhost podman[77410]: 2025-10-05 08:20:48.894856542 +0000 UTC m=+0.308587781 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr) Oct 5 04:20:48 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:20:57 localhost podman[77440]: 2025-10-05 08:20:57.69421272 +0000 UTC m=+0.092734354 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:20:57 localhost podman[77441]: 2025-10-05 08:20:57.749021339 +0000 UTC m=+0.142372333 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, container_name=logrotate_crond, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=) Oct 5 04:20:57 localhost podman[77440]: 2025-10-05 08:20:57.752687599 +0000 UTC m=+0.151209253 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc.) Oct 5 04:20:57 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:20:57 localhost podman[77439]: 2025-10-05 08:20:57.8064408 +0000 UTC m=+0.206417504 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:20:57 localhost podman[77441]: 2025-10-05 08:20:57.815056323 +0000 UTC m=+0.208407297 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-cron, vcs-type=git, version=17.1.9, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:20:57 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:20:57 localhost podman[77439]: 2025-10-05 08:20:57.841832896 +0000 UTC m=+0.241809570 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 04:20:57 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:20:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:20:59 localhost recover_tripleo_nova_virtqemud[77523]: 62622 Oct 5 04:20:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:20:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:20:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:20:59 localhost systemd[1]: tmp-crun.rKJaKi.mount: Deactivated successfully. Oct 5 04:20:59 localhost podman[77538]: 2025-10-05 08:20:59.276905077 +0000 UTC m=+0.084595484 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_migration_target, release=1) Oct 5 04:20:59 localhost podman[77538]: 2025-10-05 08:20:59.675897458 +0000 UTC m=+0.483587845 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:20:59 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:21:02 localhost podman[77611]: 2025-10-05 08:21:02.692532296 +0000 UTC m=+0.097024411 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, release=1, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:21:02 localhost podman[77610]: 2025-10-05 08:21:02.742510695 +0000 UTC 
m=+0.148692385 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, 
name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 5 04:21:02 localhost podman[77610]: 2025-10-05 08:21:02.75382529 +0000 UTC m=+0.160006990 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team) Oct 5 04:21:02 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:21:02 localhost podman[77613]: 2025-10-05 08:21:02.795856635 +0000 UTC m=+0.191754567 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12) Oct 5 04:21:02 localhost podman[77613]: 2025-10-05 08:21:02.841150808 +0000 UTC m=+0.237048750 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, container_name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:21:02 localhost podman[77612]: 2025-10-05 08:21:02.848075125 +0000 UTC m=+0.246782383 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:21:02 localhost systemd[1]: 
cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:21:02 localhost podman[77611]: 2025-10-05 08:21:02.87048229 +0000 UTC m=+0.274974405 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 
04:21:02 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:21:02 localhost podman[77612]: 2025-10-05 08:21:02.885764832 +0000 UTC m=+0.284472120 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, container_name=iscsid, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, 
name=rhosp17/openstack-iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:21:02 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:21:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:21:06 localhost systemd[1]: tmp-crun.HWWcVh.mount: Deactivated successfully. Oct 5 04:21:06 localhost podman[77694]: 2025-10-05 08:21:06.693257959 +0000 UTC m=+0.098909331 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:21:06 localhost podman[77694]: 2025-10-05 08:21:06.782899889 +0000 UTC m=+0.188551261 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:21:06 localhost podman[77694]: unhealthy Oct 5 04:21:06 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:21:06 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 04:21:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:21:19 localhost podman[77716]: 2025-10-05 08:21:19.683887574 +0000 UTC m=+0.094126822 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO 
Team, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:21:19 localhost podman[77716]: 2025-10-05 08:21:19.880088301 +0000 UTC m=+0.290327519 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, config_id=tripleo_step1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:21:19 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:21:28 localhost podman[77746]: 2025-10-05 08:21:28.685470523 +0000 UTC m=+0.089704322 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, release=1, vcs-type=git) Oct 5 04:21:28 localhost podman[77746]: 2025-10-05 08:21:28.714842786 +0000 UTC m=+0.119076565 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:21:28 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:21:28 localhost podman[77748]: 2025-10-05 08:21:28.795986426 +0000 UTC m=+0.192633731 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp 
openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, distribution-scope=public, release=1, tcib_managed=true, io.openshift.expose-services=) Oct 5 04:21:28 localhost podman[77748]: 2025-10-05 08:21:28.835938625 +0000 UTC m=+0.232585900 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, name=rhosp17/openstack-cron, version=17.1.9, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52) Oct 5 04:21:28 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:21:28 localhost podman[77747]: 2025-10-05 08:21:28.841573057 +0000 UTC m=+0.241785728 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1) Oct 5 04:21:28 localhost podman[77747]: 2025-10-05 08:21:28.929909042 +0000 UTC m=+0.330121643 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:21:28 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:21:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:21:30 localhost podman[77820]: 2025-10-05 08:21:30.679282497 +0000 UTC m=+0.089466805 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.9, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, 
com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=1, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 5 04:21:31 localhost podman[77820]: 2025-10-05 08:21:31.051114806 +0000 UTC m=+0.461299144 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-nova-compute-container) Oct 5 04:21:31 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:21:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:21:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:21:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:21:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:21:33 localhost podman[77843]: 2025-10-05 08:21:33.689467111 +0000 UTC m=+0.091703017 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, config_id=tripleo_step3, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:21:33 localhost podman[77843]: 2025-10-05 08:21:33.70609163 +0000 UTC m=+0.108327486 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, container_name=collectd, build-date=2025-07-21T13:04:03, release=2, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team) Oct 5 04:21:33 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:21:33 localhost podman[77845]: 2025-10-05 08:21:33.746371697 +0000 UTC m=+0.142809757 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, 
Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:21:33 localhost podman[77844]: 2025-10-05 08:21:33.810649242 +0000 UTC m=+0.209439465 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:21:33 localhost podman[77845]: 2025-10-05 08:21:33.834730253 +0000 UTC m=+0.231168253 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, 
com.redhat.component=openstack-iscsid-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:21:33 localhost podman[77846]: 2025-10-05 08:21:33.848664379 +0000 UTC m=+0.241334116 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, release=1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:21:33 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:21:33 localhost podman[77844]: 2025-10-05 08:21:33.862169034 +0000 UTC m=+0.260959237 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=ovn_controller) Oct 5 04:21:33 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:21:33 localhost podman[77846]: 2025-10-05 08:21:33.887185899 +0000 UTC m=+0.279855666 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:21:33 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:21:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:21:37 localhost podman[77925]: 2025-10-05 08:21:37.670984836 +0000 UTC m=+0.083460504 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step5, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, container_name=nova_compute) Oct 5 04:21:37 localhost podman[77925]: 2025-10-05 08:21:37.708493779 +0000 UTC m=+0.120969467 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 
04:21:37 localhost podman[77925]: unhealthy Oct 5 04:21:37 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:21:37 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 04:21:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:21:50 localhost podman[77947]: 2025-10-05 08:21:50.680339597 +0000 UTC m=+0.091642665 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, release=1) Oct 5 04:21:50 localhost podman[77947]: 2025-10-05 08:21:50.874015785 +0000 UTC m=+0.285318833 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:21:50 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:21:59 localhost podman[77977]: 2025-10-05 08:21:59.672993564 +0000 UTC m=+0.081098450 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
architecture=x86_64, vcs-type=git, build-date=2025-07-21T15:29:47, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:21:59 localhost podman[77977]: 2025-10-05 08:21:59.72579613 +0000 UTC m=+0.133901016 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Oct 5 04:21:59 localhost podman[77976]: 2025-10-05 08:21:59.728689698 +0000 UTC m=+0.139410275 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, release=1, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:21:59 localhost podman[77976]: 2025-10-05 08:21:59.760180808 +0000 UTC m=+0.170901355 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:21:59 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:21:59 localhost podman[77978]: 2025-10-05 08:21:59.779275444 +0000 UTC m=+0.183839444 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, build-date=2025-07-21T13:07:52, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, container_name=logrotate_crond, release=1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 5 04:21:59 localhost podman[77978]: 2025-10-05 08:21:59.788170014 +0000 UTC m=+0.192734074 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:21:59 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:21:59 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:22:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:22:01 localhost podman[78108]: 2025-10-05 08:22:01.694877198 +0000 UTC m=+0.097290548 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-nova-compute, version=17.1.9) Oct 5 04:22:02 localhost podman[78108]: 2025-10-05 08:22:02.088995448 +0000 UTC m=+0.491408718 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, distribution-scope=public, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 5 04:22:02 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.
Oct 5 04:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.
Oct 5 04:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.
Oct 5 04:22:04 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 5 04:22:04 localhost recover_tripleo_nova_virtqemud[78171]: 62622
Oct 5 04:22:04 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 5 04:22:04 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 5 04:22:04 localhost podman[78149]: 2025-10-05 08:22:04.683687043 +0000 UTC m=+0.082533009 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro',
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53) Oct 5 04:22:04 localhost podman[78149]: 2025-10-05 08:22:04.737569858 +0000 UTC m=+0.136415834 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, distribution-scope=public, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-type=git, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:22:04 localhost podman[78147]: 2025-10-05 08:22:04.740280101 +0000 UTC m=+0.145669944 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9) Oct 5 04:22:04 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:22:04 localhost podman[78146]: 2025-10-05 08:22:04.796056217 +0000 UTC m=+0.203035052 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.buildah.version=1.33.12, container_name=collectd, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:22:04 localhost podman[78148]: 2025-10-05 08:22:04.85358181 +0000 UTC m=+0.255668434 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Oct 5 04:22:04 localhost podman[78147]: 2025-10-05 08:22:04.875368207 +0000 UTC m=+0.280758050 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:22:04 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:22:04 localhost podman[78148]: 2025-10-05 08:22:04.887860095 +0000 UTC m=+0.289946759 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, container_name=iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:22:04 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:22:04 localhost podman[78146]: 2025-10-05 08:22:04.931193124 +0000 UTC m=+0.338171959 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1) Oct 5 04:22:04 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:22:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:22:08 localhost systemd[1]: tmp-crun.M7AIgX.mount: Deactivated successfully. 
Oct 5 04:22:08 localhost podman[78234]: 2025-10-05 08:22:08.694668743 +0000 UTC m=+0.101789860 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, name=rhosp17/openstack-nova-compute, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, config_id=tripleo_step5, container_name=nova_compute, build-date=2025-07-21T14:48:37, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container) Oct 5 04:22:08 localhost podman[78234]: 2025-10-05 08:22:08.757900929 +0000 UTC m=+0.165022046 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, build-date=2025-07-21T14:48:37, tcib_managed=true, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container) Oct 5 04:22:08 localhost 
podman[78234]: unhealthy
Oct 5 04:22:08 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 04:22:08 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'.
Oct 5 04:22:13 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c6:f3:3d MACPROTO=0800 SRC=199.45.155.75 DST=38.102.83.156 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=12379 DF PROTO=TCP SPT=43320 DPT=19885 SEQ=2155855129 ACK=0 WINDOW=42340 RES=0x00 SYN URGP=0 OPT (020405B40402080A61AC1B2A000000000103030A)
Oct 5 04:22:14 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c6:f3:3d MACPROTO=0800 SRC=199.45.155.75 DST=38.102.83.156 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=7588 DF PROTO=TCP SPT=43342 DPT=19885 SEQ=3076201460 ACK=0 WINDOW=42340 RES=0x00 SYN URGP=0 OPT (020405B40402080A61AC1F2E000000000103030A)
Oct 5 04:22:15 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c6:f3:3d MACPROTO=0800 SRC=199.45.155.75 DST=38.102.83.156 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=7589 DF PROTO=TCP SPT=43342 DPT=19885 SEQ=3076201460 ACK=0 WINDOW=42340 RES=0x00 SYN URGP=0 OPT (020405B40402080A61AC2335000000000103030A)
Oct 5 04:22:15 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c6:f3:3d MACPROTO=0800 SRC=199.45.155.75 DST=38.102.83.156 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=47974 DF PROTO=TCP SPT=43352 DPT=19885 SEQ=253657585 ACK=0 WINDOW=42340 RES=0x00 SYN URGP=0 OPT (020405B40402080A61AC234E000000000103030A)
Oct 5 04:22:16 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:c6:f3:3d MACPROTO=0800 SRC=199.45.155.75 DST=38.102.83.156 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=47975 DF PROTO=TCP SPT=43352 DPT=19885 SEQ=253657585 ACK=0 WINDOW=42340 RES=0x00 SYN URGP=0 OPT (020405B40402080A61AC2775000000000103030A)
Oct 5
04:22:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:22:21 localhost systemd[1]: tmp-crun.6WQ7uT.mount: Deactivated successfully. Oct 5 04:22:21 localhost podman[78256]: 2025-10-05 08:22:21.689599204 +0000 UTC m=+0.101323314 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, tcib_managed=true, vcs-type=git, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:22:21 localhost podman[78256]: 2025-10-05 08:22:21.889594089 +0000 UTC m=+0.301318259 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, release=1, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Oct 5 04:22:21 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:22:30 localhost systemd[1]: tmp-crun.9uUTfS.mount: Deactivated successfully. 
Oct 5 04:22:30 localhost podman[78286]: 2025-10-05 08:22:30.703943926 +0000 UTC m=+0.104308845 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public) Oct 5 04:22:30 localhost podman[78287]: 2025-10-05 08:22:30.711156443 +0000 UTC m=+0.108031907 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, release=1, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:22:30 localhost podman[78287]: 2025-10-05 08:22:30.747164161 +0000 UTC m=+0.144039615 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:22:30 localhost podman[78286]: 2025-10-05 08:22:30.763831114 +0000 UTC m=+0.164195972 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, 
com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute) Oct 5 04:22:30 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:22:30 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:22:30 localhost podman[78288]: 2025-10-05 08:22:30.753133983 +0000 UTC m=+0.148036703 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, 
distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container) Oct 5 04:22:30 localhost podman[78288]: 2025-10-05 08:22:30.837539497 +0000 UTC m=+0.232442297 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, 
description=Red Hat OpenStack Platform 17.1 cron, release=1, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:22:30 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:22:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:22:32 localhost podman[78363]: 2025-10-05 08:22:32.666519943 +0000 UTC m=+0.079218814 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:22:33 localhost podman[78363]: 2025-10-05 08:22:33.069666477 +0000 UTC m=+0.482365378 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target, release=1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_id=tripleo_step4) Oct 5 04:22:33 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:22:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:22:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 04:22:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:22:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:22:35 localhost podman[78386]: 2025-10-05 08:22:35.681816512 +0000 UTC m=+0.087551580 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.openshift.expose-services=, vcs-type=git) Oct 5 04:22:35 localhost podman[78389]: 2025-10-05 08:22:35.752706678 +0000 UTC m=+0.148764173 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public) Oct 5 04:22:35 localhost podman[78388]: 2025-10-05 08:22:35.80905642 +0000 UTC m=+0.208047205 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64) Oct 5 04:22:35 localhost podman[78387]: 2025-10-05 
08:22:35.849134598 +0000 UTC m=+0.251342430 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, build-date=2025-07-21T13:28:44) Oct 5 04:22:35 localhost podman[78386]: 2025-10-05 08:22:35.869964144 +0000 UTC m=+0.275699222 container exec_died 
0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, architecture=x86_64, build-date=2025-07-21T13:04:03, tcib_managed=true, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Oct 5 04:22:35 localhost podman[78389]: 2025-10-05 08:22:35.881774716 +0000 UTC m=+0.277832201 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, config_id=tripleo_step4) Oct 5 04:22:35 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:22:35 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:22:35 localhost podman[78388]: 2025-10-05 08:22:35.92133965 +0000 UTC m=+0.320330475 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp 
openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Oct 5 04:22:35 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:22:35 localhost podman[78387]: 2025-10-05 08:22:35.973434725 +0000 UTC m=+0.375642507 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:22:35 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:22:36 localhost systemd[1]: tmp-crun.X2z4oS.mount: Deactivated successfully. Oct 5 04:22:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:22:39 localhost podman[78539]: 2025-10-05 08:22:39.6693687 +0000 UTC m=+0.077811656 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:22:39 localhost podman[78539]: 2025-10-05 08:22:39.699712403 +0000 UTC m=+0.108155379 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_compute, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, config_id=tripleo_step5, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, container_name=nova_compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:22:39 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:22:46 localhost systemd[1]: libpod-92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf.scope: Deactivated successfully. 
Oct 5 04:22:47 localhost podman[78588]: 2025-10-05 08:22:47.063151289 +0000 UTC m=+0.066298213 container died 92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=nova_wait_for_compute_service, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git) Oct 5 04:22:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf-userdata-shm.mount: Deactivated successfully. Oct 5 04:22:47 localhost systemd[1]: var-lib-containers-storage-overlay-dbd5a56456730e280de0c1ee3493eb3912f8a4b0c96361bf142b5a609935db68-merged.mount: Deactivated successfully. Oct 5 04:22:47 localhost podman[78588]: 2025-10-05 08:22:47.092701521 +0000 UTC m=+0.095848415 container cleanup 92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, container_name=nova_wait_for_compute_service, release=1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step5, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:22:47 localhost systemd[1]: libpod-conmon-92cf0e9cb2e9ebbd85921511a23edea9772509d2cbcc75f51721380443510faf.scope: Deactivated successfully. 
Oct 5 04:22:47 localhost python3[76690]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=012327e9705c184cfee14ca411150d67 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:22:47 localhost python3[78641]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:22:48 localhost python3[78657]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:22:48 localhost python3[78718]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652568.1032577-119665-196295947675300/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:22:49 localhost python3[78734]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 04:22:49 
localhost systemd[1]: Reloading. Oct 5 04:22:49 localhost systemd-rc-local-generator[78761]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:22:49 localhost systemd-sysv-generator[78766]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:22:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:22:50 localhost python3[78786]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:22:50 localhost systemd[1]: Reloading. Oct 5 04:22:50 localhost systemd-rc-local-generator[78811]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:22:50 localhost systemd-sysv-generator[78816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:22:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:22:50 localhost systemd[1]: Starting nova_compute container... Oct 5 04:22:50 localhost tripleo-start-podman-container[78826]: Creating additional drop-in dependency for "nova_compute" (5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424) Oct 5 04:22:50 localhost systemd[1]: Reloading. Oct 5 04:22:50 localhost systemd-rc-local-generator[78879]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 04:22:50 localhost systemd-sysv-generator[78887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:22:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:22:51 localhost systemd[1]: Started nova_compute container. Oct 5 04:22:51 localhost python3[78923]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:22:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:22:52 localhost podman[78972]: 2025-10-05 08:22:52.172785884 +0000 UTC m=+0.089216565 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
config_id=tripleo_step1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:22:52 localhost podman[78972]: 2025-10-05 08:22:52.383800607 +0000 UTC m=+0.300231348 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:22:52 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:22:53 localhost python3[79074]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005471150 step=5 update_config_hash_only=False Oct 5 04:22:53 localhost python3[79090]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:22:54 localhost python3[79106]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 5 04:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. 
Oct 5 04:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:23:01 localhost podman[79107]: 2025-10-05 08:23:01.692842366 +0000 UTC m=+0.096038030 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, distribution-scope=public, release=1, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute) Oct 5 04:23:01 localhost systemd[1]: tmp-crun.NbSyir.mount: Deactivated successfully. Oct 5 04:23:01 localhost podman[79108]: 2025-10-05 08:23:01.752092466 +0000 UTC m=+0.151055695 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1) Oct 5 04:23:01 localhost podman[79108]: 2025-10-05 08:23:01.790838819 +0000 UTC m=+0.189802048 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1, distribution-scope=public, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:23:01 localhost podman[79109]: 2025-10-05 08:23:01.806456094 +0000 UTC m=+0.198647999 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, release=1, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:23:01 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:23:01 localhost podman[79107]: 2025-10-05 08:23:01.822091218 +0000 UTC m=+0.225286912 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 5 04:23:01 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:23:01 localhost podman[79109]: 2025-10-05 08:23:01.873826784 +0000 UTC m=+0.266018699 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, tcib_managed=true, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, version=17.1.9, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:23:01 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:23:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:23:03 localhost podman[79242]: 2025-10-05 08:23:03.680960566 +0000 UTC m=+0.087878719 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, 
container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1) Oct 5 04:23:04 localhost podman[79242]: 2025-10-05 08:23:04.091941524 +0000 UTC m=+0.498859677 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc.) Oct 5 04:23:04 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:23:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:23:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:23:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:23:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:23:06 localhost podman[79280]: 2025-10-05 08:23:06.757048977 +0000 UTC m=+0.153039360 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, release=1, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:23:06 localhost podman[79281]: 2025-10-05 08:23:06.713917836 +0000 UTC m=+0.106217728 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:23:06 localhost podman[79278]: 2025-10-05 08:23:06.799594533 +0000 UTC m=+0.196564392 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=) Oct 5 04:23:06 localhost podman[79278]: 2025-10-05 08:23:06.839809377 +0000 UTC m=+0.236779246 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, 
maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, name=rhosp17/openstack-collectd) Oct 5 04:23:06 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:23:06 localhost podman[79279]: 2025-10-05 08:23:06.855934694 +0000 UTC m=+0.252567754 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, name=rhosp17/openstack-ovn-controller, release=1, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, container_name=ovn_controller, build-date=2025-07-21T13:28:44) Oct 5 04:23:06 localhost podman[79280]: 2025-10-05 08:23:06.873460581 +0000 UTC m=+0.269450954 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 
17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1) Oct 5 04:23:06 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:23:06 localhost podman[79279]: 2025-10-05 08:23:06.886851634 +0000 UTC m=+0.283484694 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20250721.1, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:23:06 localhost podman[79281]: 2025-10-05 08:23:06.897932235 +0000 UTC m=+0.290232097 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:23:06 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:23:06 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:23:07 localhost systemd[1]: tmp-crun.TEaify.mount: Deactivated successfully. Oct 5 04:23:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:23:10 localhost systemd[1]: tmp-crun.MUIwpf.mount: Deactivated successfully. Oct 5 04:23:10 localhost podman[79363]: 2025-10-05 08:23:10.704830604 +0000 UTC m=+0.098390784 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, container_name=nova_compute, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:23:10 localhost podman[79363]: 2025-10-05 08:23:10.736213537 +0000 UTC m=+0.129773637 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, container_name=nova_compute, vcs-type=git, version=17.1.9, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp 
openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, release=1) Oct 5 04:23:10 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:23:14 localhost sshd[79387]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:23:21 localhost sshd[79389]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:23:21 localhost systemd-logind[760]: New session 34 of user zuul. Oct 5 04:23:21 localhost systemd[1]: Started Session 34 of User zuul. Oct 5 04:23:22 localhost python3[79498]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 04:23:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:23:22 localhost podman[79505]: 2025-10-05 08:23:22.697224174 +0000 UTC m=+0.098897118 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, config_id=tripleo_step1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1) Oct 5 04:23:22 localhost podman[79505]: 2025-10-05 08:23:22.900925809 +0000 UTC m=+0.302598793 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9) Oct 5 04:23:22 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. 
Oct 5 04:23:31 localhost python3[79790]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Oct 5 04:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:23:32 localhost podman[79793]: 2025-10-05 08:23:32.691596225 +0000 UTC m=+0.093919063 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, 
managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, vendor=Red Hat, Inc.) Oct 5 04:23:32 localhost podman[79793]: 2025-10-05 08:23:32.72637805 +0000 UTC m=+0.128700858 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:23:32 localhost systemd[1]: tmp-crun.6nwISX.mount: Deactivated successfully. Oct 5 04:23:32 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:23:32 localhost podman[79795]: 2025-10-05 08:23:32.749447356 +0000 UTC m=+0.148163457 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, 
batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, architecture=x86_64, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, release=1) Oct 5 04:23:32 localhost podman[79794]: 2025-10-05 08:23:32.793029861 +0000 UTC m=+0.194741883 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4) Oct 5 04:23:32 localhost podman[79795]: 2025-10-05 08:23:32.835503795 +0000 UTC m=+0.234219826 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, architecture=x86_64, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:23:32 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:23:32 localhost podman[79794]: 2025-10-05 08:23:32.850872483 +0000 UTC m=+0.252584485 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, release=1, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, version=17.1.9, io.openshift.expose-services=) Oct 5 04:23:32 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:23:33 localhost systemd[1]: tmp-crun.MBRiwY.mount: Deactivated successfully. Oct 5 04:23:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:23:34 localhost podman[79879]: 2025-10-05 08:23:34.695301918 +0000 UTC m=+0.094385346 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37) Oct 5 04:23:35 localhost podman[79879]: 2025-10-05 08:23:35.075656993 +0000 UTC m=+0.474740411 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, version=17.1.9, vendor=Red Hat, Inc., container_name=nova_migration_target, io.buildah.version=1.33.12, batch=17.1_20250721.1) Oct 5 04:23:35 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:23:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:23:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:23:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:23:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:23:37 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:23:37 localhost recover_tripleo_nova_virtqemud[79927]: 62622 Oct 5 04:23:37 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:23:37 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:23:37 localhost systemd[1]: tmp-crun.oPNPSy.mount: Deactivated successfully. Oct 5 04:23:37 localhost podman[79901]: 2025-10-05 08:23:37.753116583 +0000 UTC m=+0.156995618 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 
'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3) Oct 5 04:23:37 localhost podman[79904]: 2025-10-05 08:23:37.798240039 +0000 UTC m=+0.196185432 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:23:37 localhost podman[79901]: 2025-10-05 08:23:37.816584717 +0000 UTC 
m=+0.220463722 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, container_name=collectd, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:23:37 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:23:37 localhost podman[79904]: 2025-10-05 08:23:37.858936638 +0000 UTC m=+0.256882001 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:23:37 localhost podman[79903]: 2025-10-05 08:23:37.71693261 +0000 UTC m=+0.118294956 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-iscsid-container, release=1, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64) Oct 5 04:23:37 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:23:37 localhost podman[79902]: 2025-10-05 08:23:37.882736705 +0000 UTC m=+0.283629138 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4) Oct 5 04:23:37 localhost podman[79903]: 2025-10-05 08:23:37.905287658 +0000 UTC 
m=+0.306650004 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid) Oct 5 04:23:37 localhost podman[79902]: 2025-10-05 08:23:37.905439132 +0000 UTC m=+0.306331505 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
container_name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 5 04:23:37 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:23:37 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:23:38 localhost python3[80061]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Oct 5 04:23:38 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Oct 5 04:23:38 localhost systemd-journald[47722]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. Oct 5 04:23:38 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 04:23:38 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 04:23:38 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 04:23:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:23:41 localhost systemd[1]: tmp-crun.t0ErBh.mount: Deactivated successfully. Oct 5 04:23:41 localhost podman[80128]: 2025-10-05 08:23:41.695361079 +0000 UTC m=+0.095341361 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Oct 5 04:23:41 localhost podman[80128]: 2025-10-05 08:23:41.733774812 +0000 UTC m=+0.133755084 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red 
Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1) Oct 5 04:23:41 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:23:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:23:53 localhost podman[80154]: 2025-10-05 08:23:53.680094941 +0000 UTC m=+0.090346865 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, release=1, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:23:53 localhost podman[80154]: 2025-10-05 08:23:53.908995121 +0000 UTC m=+0.319246995 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 
qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git) Oct 5 04:23:53 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:24:03 localhost systemd[1]: tmp-crun.mOD1Gc.mount: Deactivated successfully. 
Oct 5 04:24:03 localhost podman[80184]: 2025-10-05 08:24:03.701880736 +0000 UTC m=+0.101184171 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, container_name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:24:03 localhost podman[80183]: 2025-10-05 08:24:03.742028968 +0000 UTC m=+0.145223928 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, release=1, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public) Oct 5 04:24:03 localhost podman[80183]: 2025-10-05 08:24:03.773986656 +0000 UTC m=+0.177181616 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:24:03 localhost podman[80184]: 2025-10-05 08:24:03.785946561 +0000 UTC m=+0.185249986 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12) Oct 5 04:24:03 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:24:03 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:24:03 localhost podman[80185]: 2025-10-05 08:24:03.851065301 +0000 UTC m=+0.247865978 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, release=1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:24:03 localhost podman[80185]: 2025-10-05 08:24:03.885040694 +0000 UTC m=+0.281841341 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, config_id=tripleo_step4, release=1, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:24:03 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:24:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:24:05 localhost systemd[1]: tmp-crun.Hr2BKv.mount: Deactivated successfully. Oct 5 04:24:05 localhost podman[80332]: 2025-10-05 08:24:05.45865831 +0000 UTC m=+0.092199765 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, 
com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, tcib_managed=true) Oct 5 04:24:05 localhost podman[80332]: 2025-10-05 08:24:05.830174705 +0000 UTC m=+0.463716210 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T14:48:37, version=17.1.9, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:24:05 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:24:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:24:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:24:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:24:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:24:08 localhost podman[80356]: 2025-10-05 08:24:08.689015364 +0000 UTC m=+0.082633526 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=ovn_controller, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1) Oct 5 04:24:08 localhost podman[80357]: 2025-10-05 08:24:08.701348919 +0000 UTC m=+0.088448325 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git) Oct 5 04:24:08 localhost podman[80356]: 2025-10-05 08:24:08.71279136 +0000 UTC m=+0.106409572 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, config_id=tripleo_step4, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, release=1, 
tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:24:08 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:24:08 localhost podman[80358]: 2025-10-05 08:24:08.777492978 +0000 UTC m=+0.161741826 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4) Oct 5 04:24:08 localhost podman[80357]: 2025-10-05 08:24:08.792180067 +0000 UTC m=+0.179279543 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, tcib_managed=true) Oct 5 04:24:08 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated 
successfully. Oct 5 04:24:08 localhost podman[80358]: 2025-10-05 08:24:08.822052239 +0000 UTC m=+0.206301127 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp 
openstack osp-17.1, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:24:08 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:24:08 localhost podman[80355]: 2025-10-05 08:24:08.798571961 +0000 UTC m=+0.192737558 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, release=2, version=17.1.9, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public) Oct 5 04:24:08 localhost podman[80355]: 2025-10-05 08:24:08.882133222 +0000 UTC m=+0.276298829 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, container_name=collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Oct 5 
04:24:08 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:24:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:24:12 localhost systemd[1]: tmp-crun.q28kPW.mount: Deactivated successfully. Oct 5 04:24:12 localhost podman[80442]: 2025-10-05 08:24:12.685169664 +0000 UTC m=+0.093989714 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=nova_compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public) Oct 5 04:24:12 localhost podman[80442]: 2025-10-05 08:24:12.742502643 +0000 UTC m=+0.151322763 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step5, container_name=nova_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, name=rhosp17/openstack-nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:24:12 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:24:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:24:24 localhost podman[80470]: 2025-10-05 08:24:24.686916437 +0000 UTC m=+0.088846054 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, release=1, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, tcib_managed=true, container_name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 5 04:24:24 localhost podman[80470]: 2025-10-05 08:24:24.891528037 +0000 UTC m=+0.293457644 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, container_name=metrics_qdr, architecture=x86_64, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1, vcs-type=git) Oct 5 04:24:24 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:24:34 localhost podman[80499]: 2025-10-05 08:24:34.692512174 +0000 UTC m=+0.093147592 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:24:34 localhost systemd[1]: tmp-crun.zmTYKK.mount: Deactivated successfully. Oct 5 04:24:34 localhost podman[80499]: 2025-10-05 08:24:34.752947636 +0000 UTC m=+0.153583024 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T14:45:33, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:24:34 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:24:34 localhost podman[80501]: 2025-10-05 08:24:34.799232893 +0000 UTC m=+0.193339584 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack 
Platform 17.1 cron, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:24:34 localhost podman[80500]: 2025-10-05 08:24:34.754889418 +0000 UTC m=+0.153076149 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-07-21T15:29:47, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) Oct 5 04:24:34 localhost podman[80501]: 2025-10-05 08:24:34.81272719 +0000 UTC m=+0.206833871 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:24:34 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:24:34 localhost podman[80500]: 2025-10-05 08:24:34.837965456 +0000 UTC m=+0.236152217 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, release=1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi) Oct 5 04:24:34 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:24:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:24:36 localhost podman[80565]: 2025-10-05 08:24:36.677746395 +0000 UTC m=+0.086189413 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:24:37 localhost podman[80565]: 2025-10-05 08:24:37.056080804 +0000 UTC m=+0.464523812 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target) Oct 5 04:24:37 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:24:38 localhost systemd[1]: session-34.scope: Deactivated successfully. Oct 5 04:24:38 localhost systemd[1]: session-34.scope: Consumed 6.195s CPU time. Oct 5 04:24:38 localhost systemd-logind[760]: Session 34 logged out. Waiting for processes to exit. Oct 5 04:24:38 localhost systemd-logind[760]: Removed session 34. Oct 5 04:24:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:24:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:24:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:24:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:24:39 localhost systemd[1]: tmp-crun.0AycQp.mount: Deactivated successfully. 
Oct 5 04:24:39 localhost podman[80632]: 2025-10-05 08:24:39.684241085 +0000 UTC m=+0.085742901 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step3, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container) Oct 5 04:24:39 localhost podman[80632]: 2025-10-05 08:24:39.692240983 +0000 UTC m=+0.093742819 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, 
Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible) Oct 5 04:24:39 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:24:39 localhost podman[80633]: 2025-10-05 08:24:39.736745782 +0000 UTC m=+0.133774476 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent) Oct 5 04:24:39 localhost podman[80633]: 2025-10-05 08:24:39.772919675 +0000 UTC m=+0.169948429 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, 
distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:24:39 localhost podman[80631]: 2025-10-05 08:24:39.781792776 +0000 UTC m=+0.182752457 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.33.12, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=ovn_controller, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:24:39 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:24:39 localhost podman[80630]: 2025-10-05 08:24:39.840246224 +0000 UTC m=+0.241191205 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03) Oct 5 04:24:39 localhost podman[80630]: 2025-10-05 08:24:39.849317211 +0000 UTC m=+0.250262252 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, release=2, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:24:39 localhost podman[80631]: 2025-10-05 08:24:39.859498327 +0000 UTC m=+0.260457948 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, container_name=ovn_controller, distribution-scope=public, build-date=2025-07-21T13:28:44, 
com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Oct 5 04:24:39 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:24:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:24:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:24:43 localhost systemd[1]: tmp-crun.3ZgODO.mount: Deactivated successfully. 
Oct 5 04:24:43 localhost podman[80719]: 2025-10-05 08:24:43.677763575 +0000 UTC m=+0.086779859 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:24:43 localhost podman[80719]: 2025-10-05 08:24:43.732937474 +0000 UTC m=+0.141953768 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute) Oct 5 04:24:43 localhost 
systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:24:45 localhost sshd[80745]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:24:45 localhost systemd-logind[760]: New session 35 of user zuul. Oct 5 04:24:45 localhost systemd[1]: Started Session 35 of User zuul. Oct 5 04:24:45 localhost python3[80764]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 04:24:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:24:55 localhost systemd[1]: tmp-crun.xTrvRR.mount: Deactivated successfully. 
Oct 5 04:24:55 localhost podman[80767]: 2025-10-05 08:24:55.67659594 +0000 UTC m=+0.087467079 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=metrics_qdr, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, tcib_managed=true, vcs-type=git, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1) Oct 5 04:24:55 localhost podman[80767]: 2025-10-05 08:24:55.867759934 +0000 UTC m=+0.278631043 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:24:55 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:25:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:25:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4605 writes, 20K keys, 4605 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4605 writes, 482 syncs, 9.55 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:25:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:25:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 5059 writes, 22K keys, 5059 commit groups, 
1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5059 writes, 580 syncs, 8.72 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:25:05 localhost podman[80797]: 2025-10-05 08:25:05.685759602 +0000 UTC m=+0.083452338 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public) Oct 5 04:25:05 localhost podman[80797]: 2025-10-05 08:25:05.725744709 +0000 UTC m=+0.123437485 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, description=Red Hat 
OpenStack Platform 17.1 cron, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public) Oct 5 04:25:05 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:25:05 localhost podman[80796]: 2025-10-05 08:25:05.808870127 +0000 UTC m=+0.206134352 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, tcib_managed=true, 
config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 5 04:25:05 localhost podman[80796]: 2025-10-05 08:25:05.849727527 +0000 UTC m=+0.246991762 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:25:05 localhost podman[80795]: 2025-10-05 08:25:05.859009269 +0000 UTC m=+0.260451268 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4) Oct 5 04:25:05 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:25:05 localhost podman[80795]: 2025-10-05 08:25:05.882889848 +0000 UTC m=+0.284331827 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-compute-container, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, batch=17.1_20250721.1) Oct 5 04:25:05 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:25:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:25:07 localhost podman[80927]: 2025-10-05 08:25:07.679162475 +0000 UTC m=+0.089299437 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, container_name=nova_migration_target, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Oct 5 04:25:08 localhost podman[80927]: 2025-10-05 08:25:08.051330358 +0000 UTC m=+0.461467320 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, 
build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 5 04:25:08 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:25:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:25:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 04:25:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:25:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:25:10 localhost systemd[1]: tmp-crun.49ZeCs.mount: Deactivated successfully. Oct 5 04:25:10 localhost podman[80984]: 2025-10-05 08:25:10.65124096 +0000 UTC m=+0.087535919 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:25:10 localhost podman[80984]: 2025-10-05 08:25:10.698836583 +0000 UTC m=+0.135131502 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:25:10 localhost systemd[1]: tmp-crun.wNDgQ6.mount: Deactivated successfully. 
Oct 5 04:25:10 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:25:10 localhost podman[80982]: 2025-10-05 08:25:10.720470621 +0000 UTC m=+0.165119147 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step4, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible) Oct 5 04:25:10 localhost podman[80982]: 2025-10-05 08:25:10.744061082 +0000 UTC m=+0.188709618 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 5 04:25:10 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:25:10 localhost python3[80980]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 04:25:10 localhost podman[80983]: 2025-10-05 08:25:10.765715691 +0000 UTC m=+0.206514943 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, version=17.1.9, tcib_managed=true, config_id=tripleo_step3, container_name=iscsid, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:25:10 localhost podman[80983]: 2025-10-05 08:25:10.803910629 +0000 UTC m=+0.244709871 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, version=17.1.9, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:25:10 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:25:10 localhost podman[80981]: 2025-10-05 08:25:10.818917206 +0000 UTC m=+0.265596338 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack 
osp-17.1, maintainer=OpenStack TripleO Team, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=2) Oct 5 04:25:10 localhost podman[80981]: 2025-10-05 08:25:10.832759913 +0000 UTC m=+0.279439055 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-collectd) Oct 5 04:25:10 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:25:11 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:25:11 localhost recover_tripleo_nova_virtqemud[81071]: 62622 Oct 5 04:25:11 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:25:11 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:25:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:25:14 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 04:25:14 localhost systemd[1]: Starting man-db-cache-update.service... 
Oct 5 04:25:14 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 04:25:14 localhost podman[81078]: 2025-10-05 08:25:14.580994177 +0000 UTC m=+0.131957336 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 5 04:25:14 localhost podman[81078]: 2025-10-05 08:25:14.64181523 +0000 UTC m=+0.192778419 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, tcib_managed=true, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:25:14 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:25:14 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 04:25:14 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 04:25:14 localhost systemd[1]: run-r38b6df8dfb204e7699eb769779e374f0.service: Deactivated successfully. Oct 5 04:25:14 localhost systemd[1]: run-r7fe4abf3d1e1440b9488ae35585e9db1.service: Deactivated successfully. Oct 5 04:25:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:25:26 localhost podman[81249]: 2025-10-05 08:25:26.684333751 +0000 UTC m=+0.091517817 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, version=17.1.9, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:25:26 localhost podman[81249]: 2025-10-05 08:25:26.906767965 +0000 UTC m=+0.313952031 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:25:26 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:25:36 localhost systemd[1]: tmp-crun.QW7F30.mount: Deactivated successfully. Oct 5 04:25:36 localhost podman[81278]: 2025-10-05 08:25:36.694969274 +0000 UTC m=+0.095912677 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:25:36 localhost podman[81279]: 2025-10-05 08:25:36.743624485 +0000 UTC m=+0.142470152 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:25:36 localhost podman[81278]: 2025-10-05 08:25:36.754911602 +0000 UTC m=+0.155854955 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:25:36 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:25:36 localhost podman[81279]: 2025-10-05 08:25:36.780864437 +0000 UTC m=+0.179710134 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47) Oct 5 04:25:36 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:25:36 localhost podman[81280]: 2025-10-05 08:25:36.849677757 +0000 UTC m=+0.245332947 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:25:36 localhost podman[81280]: 2025-10-05 08:25:36.862946688 +0000 UTC m=+0.258601888 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:25:36 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:25:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:25:38 localhost systemd[1]: tmp-crun.8qtlLu.mount: Deactivated successfully. 
Oct 5 04:25:38 localhost podman[81350]: 2025-10-05 08:25:38.695287625 +0000 UTC m=+0.095969708 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, com.redhat.component=openstack-nova-compute-container) Oct 5 04:25:39 localhost podman[81350]: 2025-10-05 08:25:39.073103401 +0000 UTC m=+0.473785484 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, version=17.1.9, 
distribution-scope=public, release=1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:25:39 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:25:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:25:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:25:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:25:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:25:41 localhost podman[81421]: 2025-10-05 08:25:41.697793247 +0000 UTC m=+0.089791800 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=ovn_metadata_agent) Oct 5 04:25:41 localhost systemd[1]: tmp-crun.6TeHRr.mount: Deactivated successfully. Oct 5 04:25:41 localhost podman[81419]: 2025-10-05 08:25:41.730794663 +0000 UTC m=+0.129254012 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, release=1, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, batch=17.1_20250721.1, container_name=ovn_controller) Oct 5 04:25:41 localhost podman[81420]: 2025-10-05 08:25:41.794440383 +0000 UTC m=+0.189906551 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, batch=17.1_20250721.1, container_name=iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:25:41 localhost podman[81421]: 2025-10-05 08:25:41.797804714 +0000 UTC m=+0.189803237 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public) Oct 5 04:25:41 localhost podman[81420]: 2025-10-05 08:25:41.809732199 +0000 UTC 
m=+0.205198447 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, distribution-scope=public, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12) Oct 5 04:25:41 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:25:41 localhost podman[81419]: 2025-10-05 08:25:41.820556702 +0000 UTC m=+0.219016091 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, container_name=ovn_controller, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 5 04:25:41 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:25:41 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:25:41 localhost podman[81418]: 2025-10-05 08:25:41.803595111 +0000 UTC m=+0.204884547 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, container_name=collectd, io.buildah.version=1.33.12, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, architecture=x86_64, config_id=tripleo_step3, batch=17.1_20250721.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:25:41 localhost podman[81418]: 2025-10-05 08:25:41.885763115 +0000 UTC m=+0.287052521 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 
'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, version=17.1.9, release=2, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:25:41 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:25:42 localhost systemd[1]: tmp-crun.2wD9pq.mount: Deactivated successfully. Oct 5 04:25:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:25:45 localhost podman[81500]: 2025-10-05 08:25:45.674110669 +0000 UTC m=+0.078456843 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:25:45 localhost podman[81500]: 2025-10-05 08:25:45.731013455 +0000 UTC m=+0.135359629 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37) Oct 5 04:25:45 
localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:25:52 localhost python3[81542]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 04:25:56 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 5 04:25:56 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 5 04:25:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:25:57 localhost podman[81672]: 2025-10-05 08:25:57.684489668 +0000 UTC m=+0.090089750 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 
qdrouterd, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:25:57 localhost podman[81672]: 2025-10-05 08:25:57.895456671 +0000 UTC m=+0.301056793 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=metrics_qdr, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Oct 5 04:25:57 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:26:07 localhost podman[81763]: 2025-10-05 08:26:07.685496688 +0000 UTC m=+0.081159126 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, release=1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:26:07 localhost podman[81763]: 2025-10-05 08:26:07.694672988 +0000 UTC m=+0.090335436 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:26:07 localhost podman[81762]: 2025-10-05 08:26:07.733563934 +0000 UTC m=+0.134594698 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, release=1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:26:07 localhost podman[81762]: 2025-10-05 08:26:07.764876476 +0000 UTC m=+0.165907210 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, release=1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 5 04:26:07 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:26:07 localhost podman[81761]: 2025-10-05 08:26:07.785776293 +0000 UTC m=+0.189658375 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:26:07 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:26:07 localhost podman[81761]: 2025-10-05 08:26:07.845155276 +0000 UTC m=+0.249037408 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, container_name=ceilometer_agent_compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:26:07 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:26:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:26:09 localhost podman[81832]: 2025-10-05 08:26:09.676941348 +0000 UTC m=+0.082851371 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:48:37, release=1, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:26:10 localhost podman[81832]: 2025-10-05 08:26:10.079918808 +0000 UTC m=+0.485828781 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:26:10 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:26:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:26:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:26:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:26:11 localhost systemd[1]: tmp-crun.gbNVyp.mount: Deactivated successfully. Oct 5 04:26:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:26:11 localhost podman[81985]: 2025-10-05 08:26:11.971449513 +0000 UTC m=+0.094191980 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 04:26:12 localhost podman[81983]: 2025-10-05 08:26:12.015830759 +0000 UTC m=+0.138717670 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, release=1, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Oct 5 04:26:12 localhost podman[81984]: 2025-10-05 08:26:12.063752431 +0000 UTC m=+0.185626115 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:26:12 localhost podman[81983]: 2025-10-05 08:26:12.089360087 +0000 UTC m=+0.212246998 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, tcib_managed=true) Oct 5 04:26:12 localhost podman[81984]: 2025-10-05 08:26:12.098853826 +0000 UTC m=+0.220727500 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container) Oct 5 04:26:12 localhost podman[81985]: 2025-10-05 08:26:12.131524213 +0000 UTC m=+0.254266690 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, distribution-scope=public, container_name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 
04:26:12 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:26:12 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:26:12 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:26:12 localhost podman[82019]: 2025-10-05 08:26:12.215138765 +0000 UTC m=+0.233657990 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, release=2, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:26:12 localhost podman[82019]: 2025-10-05 08:26:12.228241401 +0000 UTC m=+0.246760636 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=collectd, io.buildah.version=1.33.12) Oct 5 04:26:12 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:26:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:26:16 localhost podman[82066]: 2025-10-05 08:26:16.680435263 +0000 UTC m=+0.092094583 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step5, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, version=17.1.9, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:26:16 localhost podman[82066]: 2025-10-05 08:26:16.709923175 +0000 UTC m=+0.121582525 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, container_name=nova_compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Oct 5 04:26:16 localhost systemd[1]: 
5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:26:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:26:28 localhost podman[82092]: 2025-10-05 08:26:28.672996188 +0000 UTC m=+0.081058804 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, 
io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 5 04:26:28 localhost podman[82092]: 2025-10-05 08:26:28.899183404 +0000 UTC m=+0.307246000 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, tcib_managed=true, config_id=tripleo_step1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Oct 5 04:26:28 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:26:38 localhost systemd[1]: tmp-crun.XWwTYd.mount: Deactivated successfully. 
Oct 5 04:26:38 localhost podman[82122]: 2025-10-05 08:26:38.664325874 +0000 UTC m=+0.074158093 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, io.openshift.tags=rhosp osp 
openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 5 04:26:38 localhost podman[82123]: 2025-10-05 08:26:38.710507153 +0000 UTC m=+0.113713033 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, architecture=x86_64, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public) Oct 5 04:26:38 localhost podman[82122]: 2025-10-05 08:26:38.715629161 +0000 UTC m=+0.125461360 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, version=17.1.9) Oct 5 04:26:38 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:26:38 localhost podman[82123]: 2025-10-05 08:26:38.745893263 +0000 UTC m=+0.149099103 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, version=17.1.9, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:26:38 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:26:38 localhost podman[82121]: 2025-10-05 08:26:38.802358449 +0000 UTC m=+0.210725498 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64) Oct 5 04:26:38 localhost podman[82121]: 2025-10-05 08:26:38.827309839 +0000 UTC m=+0.235676948 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Oct 5 04:26:38 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:26:39 localhost systemd[1]: tmp-crun.4jjz7C.mount: Deactivated successfully. Oct 5 04:26:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:26:40 localhost systemd[1]: tmp-crun.PUFewe.mount: Deactivated successfully. 
Oct 5 04:26:40 localhost podman[82237]: 2025-10-05 08:26:40.685182296 +0000 UTC m=+0.096636885 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T14:48:37) Oct 5 04:26:41 localhost podman[82237]: 2025-10-05 08:26:41.081100145 +0000 UTC m=+0.492554684 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:26:41 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:26:42 localhost podman[82260]: 2025-10-05 08:26:42.681449749 +0000 UTC m=+0.088065165 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2) Oct 5 04:26:42 localhost podman[82260]: 2025-10-05 08:26:42.691761956 +0000 UTC m=+0.098377422 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public) Oct 5 04:26:42 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:26:42 localhost podman[82262]: 2025-10-05 08:26:42.789643613 +0000 UTC m=+0.189383304 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc.) Oct 5 04:26:42 localhost podman[82262]: 2025-10-05 08:26:42.80179005 +0000 UTC m=+0.201529741 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:26:42 localhost podman[82261]: 2025-10-05 08:26:42.834803946 +0000 UTC m=+0.238491614 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, container_name=ovn_controller, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 04:26:42 localhost podman[82261]: 2025-10-05 08:26:42.862872019 +0000 UTC m=+0.266559707 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:26:42 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:26:42 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:26:42 localhost podman[82263]: 2025-10-05 08:26:42.701749464 +0000 UTC m=+0.099453041 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:26:42 localhost podman[82263]: 2025-10-05 08:26:42.937852102 +0000 UTC m=+0.335555679 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1, config_id=tripleo_step4, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, distribution-scope=public) Oct 5 04:26:42 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:26:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:26:47 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:26:47 localhost recover_tripleo_nova_virtqemud[82350]: 62622 Oct 5 04:26:47 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:26:47 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:26:47 localhost systemd[1]: tmp-crun.w3pZ7t.mount: Deactivated successfully. 
Oct 5 04:26:47 localhost podman[82344]: 2025-10-05 08:26:47.685692805 +0000 UTC m=+0.091047815 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, release=1) Oct 5 04:26:47 localhost podman[82344]: 2025-10-05 08:26:47.7197762 +0000 UTC m=+0.125131180 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:26:47 localhost 
systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:26:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:26:59 localhost systemd[1]: tmp-crun.U1H7UH.mount: Deactivated successfully. Oct 5 04:26:59 localhost podman[82370]: 2025-10-05 08:26:59.701179339 +0000 UTC m=+0.106006167 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1) Oct 5 04:26:59 localhost podman[82370]: 2025-10-05 08:26:59.898698032 +0000 UTC m=+0.303524840 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, config_id=tripleo_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:26:59 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:27:09 localhost podman[82399]: 2025-10-05 08:27:09.687852077 +0000 UTC m=+0.094153619 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:27:09 localhost podman[82401]: 2025-10-05 08:27:09.736474072 +0000 UTC m=+0.134760089 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, batch=17.1_20250721.1, container_name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public) Oct 5 04:27:09 localhost podman[82399]: 2025-10-05 08:27:09.747081297 +0000 UTC m=+0.153382789 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, release=1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33) Oct 5 04:27:09 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:27:09 localhost podman[82400]: 2025-10-05 08:27:09.791276423 +0000 UTC m=+0.191020429 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:27:09 localhost podman[82401]: 2025-10-05 08:27:09.802046723 +0000 UTC m=+0.200332730 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=logrotate_crond, architecture=x86_64, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, distribution-scope=public, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git) Oct 5 04:27:09 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:27:09 localhost podman[82400]: 2025-10-05 08:27:09.849774684 +0000 UTC m=+0.249518680 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git) Oct 5 04:27:09 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:27:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:27:11 localhost podman[82472]: 2025-10-05 08:27:11.671782108 +0000 UTC m=+0.082221298 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:27:12 localhost podman[82472]: 2025-10-05 08:27:12.071931281 +0000 UTC m=+0.482370471 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:27:12 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:27:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:27:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:27:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:27:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:27:13 localhost systemd[1]: tmp-crun.nJI9eE.mount: Deactivated successfully. Oct 5 04:27:13 localhost systemd[1]: tmp-crun.2qaa2R.mount: Deactivated successfully. 
Oct 5 04:27:13 localhost podman[82573]: 2025-10-05 08:27:13.662650826 +0000 UTC m=+0.147870731 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:27:13 localhost podman[82575]: 2025-10-05 08:27:13.61957727 +0000 UTC 
m=+0.097674443 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:27:13 localhost podman[82574]: 2025-10-05 08:27:13.703873463 +0000 UTC m=+0.188434080 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.9, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid) Oct 5 04:27:13 localhost podman[82573]: 2025-10-05 08:27:13.709266158 +0000 UTC m=+0.194486063 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, tcib_managed=true, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 5 04:27:13 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:27:13 localhost podman[82574]: 2025-10-05 08:27:13.737344992 +0000 UTC m=+0.221905589 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:27:13 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:27:13 localhost podman[82572]: 2025-10-05 08:27:13.762058885 +0000 UTC m=+0.246890009 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, io.buildah.version=1.33.12, release=2, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:27:13 localhost podman[82575]: 2025-10-05 08:27:13.778986999 +0000 UTC m=+0.257084202 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, container_name=ovn_metadata_agent) Oct 5 04:27:13 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:27:13 localhost podman[82572]: 2025-10-05 08:27:13.797160717 +0000 UTC m=+0.281991811 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, container_name=collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:27:13 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:27:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:27:18 localhost podman[82655]: 2025-10-05 08:27:18.681778653 +0000 UTC m=+0.087262095 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:27:18 localhost podman[82655]: 2025-10-05 08:27:18.71484284 +0000 UTC m=+0.120326322 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-nova-compute-container, vcs-type=git, build-date=2025-07-21T14:48:37, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=nova_compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public) Oct 5 04:27:18 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:27:23 localhost python3[82696]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 04:27:27 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 5 04:27:27 localhost rhsm-service[6485]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 5 04:27:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:27:30 localhost systemd[1]: tmp-crun.OIgsX0.mount: Deactivated successfully. 
Oct 5 04:27:30 localhost podman[82827]: 2025-10-05 08:27:30.708620101 +0000 UTC m=+0.109245354 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, release=1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 5 04:27:30 localhost podman[82827]: 2025-10-05 08:27:30.894662815 +0000 UTC m=+0.295288058 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, 
name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, container_name=metrics_qdr) Oct 5 04:27:30 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:27:40 localhost podman[82935]: 2025-10-05 08:27:40.682379851 +0000 UTC m=+0.087894000 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, release=1, container_name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc.) Oct 5 04:27:40 localhost podman[82937]: 2025-10-05 08:27:40.743541143 +0000 UTC m=+0.142282021 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Oct 5 04:27:40 localhost podman[82936]: 2025-10-05 08:27:40.773100366 +0000 UTC m=+0.175133102 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:27:40 localhost podman[82937]: 2025-10-05 08:27:40.78069021 +0000 UTC m=+0.179431108 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, container_name=logrotate_crond, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 5 04:27:40 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:27:40 localhost podman[82935]: 2025-10-05 08:27:40.79520669 +0000 UTC m=+0.200720839 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, architecture=x86_64, 
com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step4) Oct 5 04:27:40 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:27:40 localhost podman[82936]: 2025-10-05 08:27:40.829151491 +0000 UTC m=+0.231184237 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:27:40 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:27:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:27:42 localhost podman[83030]: 2025-10-05 08:27:42.683462913 +0000 UTC m=+0.090425558 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4) Oct 5 04:27:43 localhost podman[83030]: 2025-10-05 08:27:43.091008385 +0000 UTC m=+0.497971060 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:27:43 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:27:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:27:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:27:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:27:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:27:44 localhost podman[83053]: 2025-10-05 08:27:44.683174669 +0000 UTC m=+0.089951986 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:27:44 localhost podman[83053]: 2025-10-05 08:27:44.697090142 +0000 UTC m=+0.103867499 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, batch=17.1_20250721.1, container_name=collectd, distribution-scope=public) Oct 5 04:27:44 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:27:44 localhost podman[83055]: 2025-10-05 08:27:44.795018891 +0000 UTC m=+0.194295627 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack 
TripleO Team, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 5 04:27:44 localhost podman[83055]: 2025-10-05 08:27:44.80575324 +0000 UTC m=+0.205029976 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, 
com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:27:44 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:27:44 localhost podman[83054]: 2025-10-05 08:27:44.896120885 +0000 UTC m=+0.298051562 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.) Oct 5 04:27:44 localhost podman[83056]: 2025-10-05 08:27:44.948169033 +0000 UTC m=+0.344416177 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T16:28:53, vcs-type=git) Oct 5 04:27:44 localhost podman[83054]: 2025-10-05 08:27:44.951828611 +0000 UTC m=+0.353759298 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:27:44 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:27:45 localhost podman[83056]: 2025-10-05 08:27:45.022928301 +0000 UTC m=+0.419175465 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:27:45 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:27:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:27:49 localhost podman[83140]: 2025-10-05 08:27:49.685880754 +0000 UTC m=+0.090442169 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:27:49 localhost podman[83140]: 2025-10-05 08:27:49.720993447 +0000 UTC m=+0.125554842 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:27:49 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:28:01 localhost python3[83179]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Oct 5 04:28:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:28:01 localhost podman[83180]: 2025-10-05 08:28:01.692254333 +0000 UTC m=+0.096606615 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Oct 5 04:28:01 localhost podman[83180]: 2025-10-05 08:28:01.901070958 +0000 UTC m=+0.305423270 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, release=1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:28:01 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:28:11 localhost podman[83208]: 2025-10-05 08:28:11.723758954 +0000 UTC m=+0.097931730 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team) Oct 5 04:28:11 localhost podman[83208]: 2025-10-05 08:28:11.755894036 +0000 UTC m=+0.130066732 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1) Oct 5 04:28:11 localhost systemd[1]: tmp-crun.AU9KdS.mount: Deactivated successfully. 
Oct 5 04:28:11 localhost podman[83209]: 2025-10-05 08:28:11.771597118 +0000 UTC m=+0.141707686 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:28:11 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:28:11 localhost podman[83210]: 2025-10-05 08:28:11.820890221 +0000 UTC m=+0.188692787 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1) Oct 5 04:28:11 localhost podman[83209]: 2025-10-05 08:28:11.829837732 +0000 UTC m=+0.199948350 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, architecture=x86_64, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:28:11 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:28:11 localhost podman[83210]: 2025-10-05 08:28:11.859183249 +0000 UTC m=+0.226985845 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO 
Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:28:11 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:28:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:28:13 localhost podman[83282]: 2025-10-05 08:28:13.683200657 +0000 UTC m=+0.087303684 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, tcib_managed=true) Oct 5 04:28:14 localhost podman[83282]: 2025-10-05 08:28:14.052635235 +0000 UTC m=+0.456738292 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, container_name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:28:14 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:28:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:28:14 localhost recover_tripleo_nova_virtqemud[83338]: 62622 Oct 5 04:28:14 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:28:14 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:28:14 localhost podman[83411]: 2025-10-05 08:28:14.747706466 +0000 UTC m=+0.099569364 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 04:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:28:14 localhost podman[83430]: 2025-10-05 08:28:14.854337668 +0000 UTC m=+0.089475483 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12) Oct 5 04:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:28:14 localhost podman[83411]: 2025-10-05 08:28:14.880055899 +0000 UTC m=+0.231918857 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, version=7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, build-date=2025-09-24T08:57:55, release=553, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph) 
Oct 5 04:28:14 localhost podman[83430]: 2025-10-05 08:28:14.897937329 +0000 UTC m=+0.133075074 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=2, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, container_name=collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:28:14 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:28:15 localhost podman[83450]: 2025-10-05 08:28:14.964450515 +0000 UTC m=+0.090508182 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, config_id=tripleo_step3, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:28:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:28:15 localhost podman[83496]: 2025-10-05 08:28:15.084662802 +0000 UTC m=+0.085693802 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12) Oct 5 04:28:15 localhost podman[83450]: 2025-10-05 08:28:15.099012328 +0000 UTC 
m=+0.225069945 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:28:15 localhost podman[83496]: 2025-10-05 08:28:15.115913701 +0000 UTC m=+0.116944691 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, 
io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, distribution-scope=public) Oct 5 04:28:15 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:28:15 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:28:15 localhost podman[83516]: 2025-10-05 08:28:15.182017815 +0000 UTC m=+0.089891174 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53) Oct 5 04:28:15 localhost podman[83516]: 2025-10-05 08:28:15.264034108 +0000 UTC m=+0.171907487 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:28:15 localhost 
systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:28:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:28:20 localhost podman[83638]: 2025-10-05 08:28:20.680960963 +0000 UTC m=+0.088709092 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12) Oct 5 04:28:20 localhost podman[83638]: 2025-10-05 08:28:20.736235497 +0000 UTC m=+0.143983596 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1) Oct 5 04:28:20 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:28:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:28:32 localhost podman[83664]: 2025-10-05 08:28:32.683641914 +0000 UTC m=+0.088498568 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:28:32 localhost podman[83664]: 2025-10-05 08:28:32.913924766 +0000 UTC m=+0.318781400 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, version=17.1.9, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1) Oct 5 04:28:32 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:28:42 localhost podman[83737]: 2025-10-05 08:28:42.687125213 +0000 UTC m=+0.092627408 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:28:42 localhost podman[83738]: 2025-10-05 08:28:42.733727433 +0000 UTC m=+0.138029746 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, release=1, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:28:42 localhost podman[83737]: 2025-10-05 08:28:42.74403642 +0000 UTC m=+0.149538575 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, container_name=ceilometer_agent_compute, version=17.1.9, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:28:42 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:28:42 localhost podman[83739]: 2025-10-05 08:28:42.787159017 +0000 UTC m=+0.188443499 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.9, 
build-date=2025-07-21T13:07:52, container_name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, name=rhosp17/openstack-cron, release=1, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:28:42 localhost podman[83739]: 2025-10-05 08:28:42.81963136 +0000 UTC m=+0.220915862 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:28:42 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:28:42 localhost podman[83738]: 2025-10-05 08:28:42.840655593 +0000 UTC m=+0.244957896 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:28:42 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:28:44 localhost systemd[1]: tmp-crun.VZ4MdE.mount: Deactivated successfully. 
Oct 5 04:28:44 localhost podman[83809]: 2025-10-05 08:28:44.686026306 +0000 UTC m=+0.093073120 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, tcib_managed=true, com.redhat.component=openstack-nova-compute-container) Oct 5 04:28:45 localhost podman[83809]: 2025-10-05 08:28:45.067652381 +0000 UTC m=+0.474699155 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, build-date=2025-07-21T14:48:37, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container) Oct 5 04:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:28:45 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:28:45 localhost systemd[1]: tmp-crun.WggSrH.mount: Deactivated successfully. 
Oct 5 04:28:45 localhost podman[83831]: 2025-10-05 08:28:45.175011354 +0000 UTC m=+0.084323855 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03) Oct 5 04:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:28:45 localhost podman[83831]: 2025-10-05 08:28:45.210536497 +0000 UTC m=+0.119848988 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:28:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:28:45 localhost podman[83851]: 2025-10-05 08:28:45.296053193 +0000 UTC m=+0.092239037 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:28:45 localhost podman[83852]: 2025-10-05 08:28:45.346905528 +0000 UTC m=+0.138657013 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, batch=17.1_20250721.1, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, container_name=iscsid, 
com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:28:45 localhost podman[83851]: 2025-10-05 08:28:45.399720926 +0000 UTC m=+0.195906800 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, 
release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Oct 5 04:28:45 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:28:45 localhost podman[83884]: 2025-10-05 08:28:45.413344082 +0000 UTC m=+0.083681218 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:28:45 localhost podman[83852]: 2025-10-05 08:28:45.435176778 +0000 UTC m=+0.226928243 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc.) Oct 5 04:28:45 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:28:45 localhost podman[83884]: 2025-10-05 08:28:45.463013166 +0000 UTC m=+0.133350302 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, container_name=ovn_metadata_agent, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:28:45 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:28:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:28:51 localhost podman[83917]: 2025-10-05 08:28:51.684583402 +0000 UTC m=+0.091296292 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container) Oct 5 04:28:51 localhost podman[83917]: 2025-10-05 08:28:51.716992842 +0000 UTC m=+0.123705722 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, distribution-scope=public, container_name=nova_compute, release=1, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 
nova-compute) Oct 5 04:28:51 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:29:01 localhost systemd[1]: session-35.scope: Deactivated successfully. Oct 5 04:29:01 localhost systemd[1]: session-35.scope: Consumed 19.468s CPU time. Oct 5 04:29:01 localhost systemd-logind[760]: Session 35 logged out. Waiting for processes to exit. Oct 5 04:29:01 localhost systemd-logind[760]: Removed session 35. Oct 5 04:29:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:29:03 localhost podman[83943]: 2025-10-05 08:29:03.673550885 +0000 UTC m=+0.078949440 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, config_id=tripleo_step1, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-type=git) Oct 5 04:29:03 localhost podman[83943]: 2025-10-05 08:29:03.888237459 +0000 UTC m=+0.293635974 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-type=git, container_name=metrics_qdr, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12) Oct 5 04:29:03 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:29:13 localhost systemd[1]: tmp-crun.W6muEa.mount: Deactivated successfully. 
Oct 5 04:29:13 localhost podman[83972]: 2025-10-05 08:29:13.685200683 +0000 UTC m=+0.091072836 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-type=git, 
io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:29:13 localhost podman[83972]: 2025-10-05 08:29:13.718826266 +0000 UTC m=+0.124698439 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, release=1, distribution-scope=public, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64) Oct 5 04:29:13 localhost systemd[1]: tmp-crun.2WSOHV.mount: Deactivated successfully. Oct 5 04:29:13 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:29:13 localhost podman[83971]: 2025-10-05 08:29:13.744439813 +0000 UTC m=+0.151858997 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true) Oct 5 04:29:13 localhost podman[83973]: 2025-10-05 08:29:13.785674351 +0000 UTC m=+0.187543786 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, container_name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9) Oct 5 04:29:13 localhost podman[83971]: 2025-10-05 08:29:13.804977949 +0000 UTC m=+0.212397143 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, 
release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 5 04:29:13 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:29:13 localhost podman[83973]: 2025-10-05 08:29:13.825060698 +0000 UTC m=+0.226930143 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, name=rhosp17/openstack-cron) Oct 5 04:29:13 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:29:15 localhost podman[84044]: 2025-10-05 08:29:15.689512932 +0000 UTC m=+0.088357143 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, distribution-scope=public, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4) Oct 5 04:29:15 localhost podman[84046]: 2025-10-05 08:29:15.674828648 +0000 UTC 
m=+0.071949572 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 5 04:29:15 localhost podman[84044]: 2025-10-05 08:29:15.728154669 +0000 UTC m=+0.126998840 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:29:15 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:29:15 localhost podman[84043]: 2025-10-05 08:29:15.726451254 +0000 UTC m=+0.129235131 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, release=2, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=collectd, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 5 04:29:15 localhost podman[84045]: 2025-10-05 08:29:15.784351498 +0000 UTC m=+0.177906026 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) 
Oct 5 04:29:15 localhost podman[84043]: 2025-10-05 08:29:15.811863367 +0000 UTC m=+0.214647244 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, version=17.1.9, 
release=2, vcs-type=git, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:29:15 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:29:15 localhost podman[84045]: 2025-10-05 08:29:15.866293079 +0000 UTC m=+0.259847607 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, container_name=iscsid, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T13:27:15, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid) Oct 5 04:29:15 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:29:15 localhost podman[84052]: 2025-10-05 08:29:15.869083954 +0000 UTC m=+0.256942270 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:29:15 localhost podman[84052]: 2025-10-05 08:29:15.949077841 +0000 UTC m=+0.336936147 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:29:15 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:29:16 localhost podman[84046]: 2025-10-05 08:29:16.055050896 +0000 UTC m=+0.452171860 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true, release=1, vcs-type=git) Oct 5 04:29:16 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:29:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:29:22 localhost podman[84226]: 2025-10-05 08:29:22.68657317 +0000 UTC m=+0.088668912 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, release=1, config_id=tripleo_step5, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:29:22 localhost podman[84226]: 2025-10-05 08:29:22.744081604 +0000 UTC m=+0.146177366 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, 
container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 
nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, version=17.1.9, config_id=tripleo_step5, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:29:22 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:29:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:29:34 localhost podman[84253]: 2025-10-05 08:29:34.685431676 +0000 UTC m=+0.091132207 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, architecture=x86_64, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1) Oct 5 04:29:34 localhost podman[84253]: 2025-10-05 08:29:34.903753917 +0000 UTC m=+0.309454378 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, release=1, config_id=tripleo_step1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12) Oct 5 04:29:34 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:29:40 localhost sshd[84281]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. 
Oct 5 04:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:29:44 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:29:44 localhost recover_tripleo_nova_virtqemud[84341]: 62622 Oct 5 04:29:44 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:29:44 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:29:44 localhost podman[84327]: 2025-10-05 08:29:44.680088248 +0000 UTC m=+0.084902279 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:29:44 localhost podman[84328]: 2025-10-05 08:29:44.728161229 +0000 UTC m=+0.131864231 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12) Oct 5 04:29:44 localhost podman[84327]: 2025-10-05 08:29:44.73937912 +0000 UTC m=+0.144193191 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_compute) Oct 5 04:29:44 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: 
Deactivated successfully. Oct 5 04:29:44 localhost podman[84328]: 2025-10-05 08:29:44.784945314 +0000 UTC m=+0.188648336 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4) Oct 5 04:29:44 localhost podman[84329]: 2025-10-05 08:29:44.691526775 +0000 UTC m=+0.087177401 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vendor=Red Hat, Inc.) Oct 5 04:29:44 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:29:44 localhost podman[84329]: 2025-10-05 08:29:44.828813661 +0000 UTC m=+0.224464297 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4) Oct 5 04:29:44 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:29:45 localhost systemd[1]: tmp-crun.lMSyqT.mount: Deactivated successfully. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:29:46 localhost podman[84400]: 2025-10-05 08:29:46.683233506 +0000 UTC m=+0.081669584 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, io.openshift.expose-services=, release=2, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, tcib_managed=true, config_id=tripleo_step3, version=17.1.9, architecture=x86_64, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, 
batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:29:46 localhost podman[84400]: 2025-10-05 08:29:46.695646259 +0000 UTC m=+0.094082307 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, container_name=collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp17/openstack-collectd, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:29:46 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:29:46 localhost podman[84401]: 2025-10-05 08:29:46.731608904 +0000 UTC m=+0.128441958 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, release=1, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step4) Oct 5 04:29:46 localhost podman[84404]: 2025-10-05 08:29:46.785904572 +0000 UTC 
m=+0.174609559 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:29:46 localhost podman[84404]: 2025-10-05 08:29:46.829742709 +0000 UTC m=+0.218447726 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:29:46 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:29:46 localhost podman[84402]: 2025-10-05 08:29:46.850690621 +0000 UTC m=+0.240512158 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, 
config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public) Oct 5 04:29:46 localhost podman[84401]: 2025-10-05 08:29:46.860971638 +0000 UTC m=+0.257804732 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, build-date=2025-07-21T13:28:44, maintainer=OpenStack 
TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller) Oct 5 04:29:46 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:29:46 localhost podman[84402]: 2025-10-05 08:29:46.888907058 +0000 UTC m=+0.278728625 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:29:46 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:29:46 localhost podman[84403]: 2025-10-05 08:29:46.947936942 +0000 UTC m=+0.338870728 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, release=1, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, vcs-type=git, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:29:47 localhost podman[84403]: 2025-10-05 08:29:47.323741981 +0000 UTC m=+0.714675787 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, 
release=1, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2025-07-21T14:48:37) Oct 5 04:29:47 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:29:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:29:53 localhost podman[84511]: 2025-10-05 08:29:53.683972462 +0000 UTC m=+0.090611543 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Oct 5 04:29:53 localhost podman[84511]: 2025-10-05 08:29:53.740081558 +0000 UTC m=+0.146720619 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T14:48:37, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Oct 5 04:29:53 
localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:30:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:30:05 localhost podman[84557]: 2025-10-05 08:30:05.693746393 +0000 UTC m=+0.098402133 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, container_name=metrics_qdr, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-qdrouterd) Oct 5 04:30:05 localhost podman[84557]: 2025-10-05 08:30:05.910559044 +0000 UTC m=+0.315214754 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, release=1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9) Oct 5 04:30:05 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:30:12 localhost systemd-logind[760]: Existing logind session ID 29 used by new audit session, ignoring. Oct 5 04:30:12 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 04:30:12 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 5 04:30:12 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 04:30:12 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 04:30:12 localhost systemd[84940]: Queued start job for default target Main User Target. Oct 5 04:30:12 localhost systemd[84940]: Created slice User Application Slice. Oct 5 04:30:12 localhost systemd[84940]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). 
Oct 5 04:30:12 localhost systemd[84940]: Started Daily Cleanup of User's Temporary Directories. Oct 5 04:30:12 localhost systemd[84940]: Reached target Paths. Oct 5 04:30:12 localhost systemd[84940]: Reached target Timers. Oct 5 04:30:12 localhost systemd[84940]: Starting D-Bus User Message Bus Socket... Oct 5 04:30:12 localhost systemd[84940]: Starting Create User's Volatile Files and Directories... Oct 5 04:30:12 localhost systemd[84940]: Finished Create User's Volatile Files and Directories. Oct 5 04:30:12 localhost systemd[84940]: Listening on D-Bus User Message Bus Socket. Oct 5 04:30:12 localhost systemd[84940]: Reached target Sockets. Oct 5 04:30:12 localhost systemd[84940]: Reached target Basic System. Oct 5 04:30:12 localhost systemd[84940]: Reached target Main User Target. Oct 5 04:30:12 localhost systemd[84940]: Startup finished in 146ms. Oct 5 04:30:12 localhost systemd[1]: Started User Manager for UID 0. Oct 5 04:30:12 localhost systemd[1]: Started Session c11 of User root. Oct 5 04:30:13 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Oct 5 04:30:13 localhost kernel: device tap4db5c636-30 entered promiscuous mode Oct 5 04:30:13 localhost NetworkManager[5981]: [1759653013.6971] manager: (tap4db5c636-30): new Tun device (/org/freedesktop/NetworkManager/Devices/13) Oct 5 04:30:13 localhost systemd-udevd[84975]: Network interface NamePolicy= disabled on kernel command line. Oct 5 04:30:13 localhost NetworkManager[5981]: [1759653013.7185] device (tap4db5c636-30): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 5 04:30:13 localhost NetworkManager[5981]: [1759653013.7191] device (tap4db5c636-30): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Oct 5 04:30:13 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Oct 5 04:30:13 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Oct 5 04:30:13 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Oct 5 04:30:13 localhost systemd-machined[84982]: New machine qemu-1-instance-00000002. Oct 5 04:30:13 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000002. Oct 5 04:30:13 localhost NetworkManager[5981]: [1759653013.9926] manager: (tap20d6a6dc-00): new Veth device (/org/freedesktop/NetworkManager/Devices/14) Oct 5 04:30:13 localhost systemd-udevd[84974]: Network interface NamePolicy= disabled on kernel command line. Oct 5 04:30:14 localhost NetworkManager[5981]: [1759653014.0531] device (tap20d6a6dc-00): carrier: link connected Oct 5 04:30:14 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap20d6a6dc-01: link becomes ready Oct 5 04:30:14 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap20d6a6dc-00: link becomes ready Oct 5 04:30:14 localhost kernel: device tap20d6a6dc-00 entered promiscuous mode Oct 5 04:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:30:15 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... 
Oct 5 04:30:15 localhost podman[85079]: 2025-10-05 08:30:15.627738319 +0000 UTC m=+0.093157142 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc.) Oct 5 04:30:15 localhost systemd[1]: tmp-crun.V52UqV.mount: Deactivated successfully. Oct 5 04:30:15 localhost podman[85079]: 2025-10-05 08:30:15.680971188 +0000 UTC m=+0.146390051 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, architecture=x86_64) Oct 5 04:30:15 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:30:15 localhost podman[85078]: 2025-10-05 08:30:15.685590492 +0000 UTC m=+0.150299246 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team) Oct 5 04:30:15 localhost podman[85078]: 2025-10-05 08:30:15.771810017 +0000 UTC m=+0.236518751 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, io.buildah.version=1.33.12, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:30:15 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:30:15 localhost podman[85080]: 2025-10-05 08:30:15.739474118 +0000 UTC m=+0.205058346 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red 
Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, release=1, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible) Oct 5 04:30:15 localhost podman[85080]: 2025-10-05 08:30:15.819030305 +0000 UTC m=+0.284614503 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1, container_name=logrotate_crond, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 04:30:15 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:30:15 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Oct 5 04:30:16 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Oct 5 04:30:16 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Oct 5 04:30:16 localhost podman[85182]: 2025-10-05 08:30:16.231061086 +0000 UTC m=+0.075749864 container create f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public) Oct 5 04:30:16 localhost systemd[1]: Started libpod-conmon-f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4.scope. Oct 5 04:30:16 localhost systemd[1]: Started libcrun container. Oct 5 04:30:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfe7524ac8a56ed96644e8d260c683ffdbe8eccf2a9392e52f36c399e070e93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 04:30:16 localhost podman[85182]: 2025-10-05 08:30:16.199004915 +0000 UTC m=+0.043693713 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 5 04:30:16 localhost podman[85182]: 2025-10-05 08:30:16.30235658 +0000 UTC m=+0.147045398 container init f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, maintainer=OpenStack 
TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:30:16 localhost podman[85182]: 2025-10-05 08:30:16.309436761 +0000 UTC m=+0.154125579 container start f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:30:16 localhost setroubleshoot[85081]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count. 
For complete SELinux messages run: sealert -l bc7c4c30-a01c-4646-a4d2-6e9ebe46b7a9 Oct 5 04:30:16 localhost setroubleshoot[85081]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count.#012#012***** Plugin qemu_file_image (98.8 confidence) suggests *******************#012#012If max_map_count is a virtualization target#012Then you need to change the label on max_map_count'#012Do#012# semanage fcontext -a -t virt_image_t 'max_map_count'#012# restorecon -v 'max_map_count'#012#012***** Plugin catchall (2.13 confidence) suggests **************************#012#012If you believe that qemu-kvm should be allowed read access on the max_map_count file by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'qemu-kvm' --raw | audit2allow -M my-qemukvm#012# semodule -X 300 -i my-qemukvm.pp#012 Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:30:17 localhost podman[85216]: 2025-10-05 08:30:17.711570472 +0000 UTC m=+0.101982458 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step3, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp 
osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:30:17 localhost podman[85216]: 2025-10-05 08:30:17.722066884 +0000 UTC m=+0.112478880 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, vcs-type=git, container_name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15) Oct 5 04:30:17 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:30:17 localhost podman[85220]: 2025-10-05 08:30:17.766196709 +0000 UTC m=+0.150054829 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:30:17 localhost systemd[1]: tmp-crun.8AhZt6.mount: Deactivated successfully. 
Oct 5 04:30:17 localhost podman[85215]: 2025-10-05 08:30:17.809129452 +0000 UTC m=+0.202837816 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc.) 
Oct 5 04:30:17 localhost podman[85215]: 2025-10-05 08:30:17.856045261 +0000 UTC m=+0.249753605 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:30:17 localhost podman[85220]: 2025-10-05 08:30:17.86197131 +0000 UTC m=+0.245829490 container 
exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, distribution-scope=public, version=17.1.9) Oct 5 04:30:17 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:30:17 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:30:17 localhost podman[85217]: 2025-10-05 08:30:17.884589958 +0000 UTC m=+0.267103682 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible) Oct 5 04:30:17 localhost podman[85214]: 2025-10-05 08:30:17.855950609 +0000 UTC m=+0.250644810 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, release=2, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.buildah.version=1.33.12) Oct 5 04:30:17 localhost podman[85214]: 2025-10-05 08:30:17.934517498 +0000 UTC m=+0.329211649 container exec_died 
0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, name=rhosp17/openstack-collectd, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, 
io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, container_name=collectd, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 5 04:30:17 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:30:18 localhost podman[85217]: 2025-10-05 08:30:18.251121748 +0000 UTC m=+0.633635482 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, 
vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9) Oct 5 04:30:18 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:30:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:30:24 localhost systemd[1]: tmp-crun.7RXAZY.mount: Deactivated successfully. 
Oct 5 04:30:24 localhost podman[85403]: 2025-10-05 08:30:24.730646431 +0000 UTC m=+0.134789670 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_compute, vcs-type=git, release=1, distribution-scope=public, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:30:24 localhost podman[85403]: 2025-10-05 08:30:24.76752025 +0000 UTC m=+0.171663479 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, build-date=2025-07-21T14:48:37, architecture=x86_64, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, 
managed_by=tripleo_ansible) Oct 5 04:30:24 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:30:26 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Oct 5 04:30:27 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Oct 5 04:30:32 localhost snmpd[68045]: empty variable list in _query Oct 5 04:30:32 localhost snmpd[68045]: empty variable list in _query Oct 5 04:30:35 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35438 [05/Oct/2025:08:30:34.439] listener listener/metadata 0/0/0/1313/1313 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Oct 5 04:30:35 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35442 [05/Oct/2025:08:30:35.866] listener listener/metadata 0/0/0/16/16 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-keys HTTP/1.1" Oct 5 04:30:35 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35458 [05/Oct/2025:08:30:35.939] listener listener/metadata 0/0/0/14/14 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35470 [05/Oct/2025:08:30:36.003] listener listener/metadata 0/0/0/12/12 200 120 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35476 [05/Oct/2025:08:30:36.060] listener listener/metadata 0/0/0/11/11 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-type HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35490 [05/Oct/2025:08:30:36.115] listener listener/metadata 0/0/0/15/15 
200 132 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35502 [05/Oct/2025:08:30:36.183] listener listener/metadata 0/0/0/13/13 200 134 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35516 [05/Oct/2025:08:30:36.236] listener listener/metadata 0/0/0/12/12 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/hostname HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35532 [05/Oct/2025:08:30:36.293] listener listener/metadata 0/0/0/15/15 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35538 [05/Oct/2025:08:30:36.346] listener listener/metadata 0/0/0/12/12 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/user-data HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35548 [05/Oct/2025:08:30:36.399] listener listener/metadata 0/0/0/19/19 200 139 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35560 [05/Oct/2025:08:30:36.450] listener listener/metadata 0/0/0/19/19 200 122 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35566 [05/Oct/2025:08:30:36.505] listener listener/metadata 0/0/0/16/16 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" Oct 5 04:30:36 localhost 
haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35572 [05/Oct/2025:08:30:36.556] listener listener/metadata 0/0/0/19/19 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" Oct 5 04:30:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35584 [05/Oct/2025:08:30:36.621] listener listener/metadata 0/0/0/11/11 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" Oct 5 04:30:36 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[85205]: 192.168.0.56:35586 [05/Oct/2025:08:30:36.675] listener listener/metadata 0/0/0/11/11 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" Oct 5 04:30:36 localhost systemd[1]: tmp-crun.hlTdZW.mount: Deactivated successfully. 
Oct 5 04:30:36 localhost podman[85429]: 2025-10-05 08:30:36.717204629 +0000 UTC m=+0.113273643 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-qdrouterd, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team) Oct 5 04:30:36 localhost podman[85429]: 2025-10-05 08:30:36.955475696 +0000 UTC m=+0.351544680 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 5 04:30:36 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:30:46 localhost podman[85503]: 2025-10-05 08:30:46.670817368 +0000 UTC m=+0.077473212 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, container_name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, 
name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Oct 5 04:30:46 localhost podman[85503]: 2025-10-05 08:30:46.728573598 +0000 UTC m=+0.135229442 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:30:46 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:30:46 localhost podman[85502]: 2025-10-05 08:30:46.731355732 +0000 UTC m=+0.140168434 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4) Oct 5 04:30:46 localhost systemd[1]: tmp-crun.xOC6Yd.mount: Deactivated successfully. 
Oct 5 04:30:46 localhost podman[85504]: 2025-10-05 08:30:46.794660322 +0000 UTC m=+0.195682794 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, 
batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, build-date=2025-07-21T13:07:52) Oct 5 04:30:46 localhost podman[85502]: 2025-10-05 08:30:46.816386396 +0000 UTC m=+0.225199098 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, container_name=ceilometer_agent_compute, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public) Oct 5 04:30:46 localhost podman[85504]: 2025-10-05 08:30:46.832094417 +0000 UTC m=+0.233116889 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, container_name=logrotate_crond) Oct 5 04:30:46 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:30:46 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:30:48 localhost podman[85574]: 2025-10-05 08:30:48.701625118 +0000 UTC m=+0.102164484 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:30:48 localhost podman[85574]: 2025-10-05 08:30:48.746260986 +0000 UTC 
m=+0.146800412 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:28:44) Oct 5 04:30:48 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:30:48 localhost systemd[1]: tmp-crun.qZF29j.mount: Deactivated successfully. Oct 5 04:30:48 localhost podman[85576]: 2025-10-05 08:30:48.863155864 +0000 UTC m=+0.254815932 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, release=1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4) Oct 5 04:30:48 localhost podman[85573]: 2025-10-05 08:30:48.844645328 +0000 UTC m=+0.246594052 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., release=2, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:30:48 localhost podman[85575]: 2025-10-05 08:30:48.918771717 +0000 UTC m=+0.313252601 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, container_name=iscsid) Oct 5 04:30:48 localhost podman[85573]: 2025-10-05 08:30:48.927114321 +0000 UTC m=+0.329062965 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, 
maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:04:03, release=2, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, architecture=x86_64) Oct 5 04:30:48 localhost podman[85588]: 2025-10-05 08:30:48.784665017 +0000 UTC m=+0.171892156 container health_status 
cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, 
container_name=ovn_metadata_agent, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:30:48 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:30:48 localhost podman[85588]: 2025-10-05 08:30:48.967516836 +0000 UTC m=+0.354744065 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:30:48 localhost podman[85575]: 2025-10-05 08:30:48.979806536 +0000 UTC m=+0.374287480 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, version=17.1.9, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, container_name=iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, vcs-type=git, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:30:48 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:30:48 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:30:49 localhost podman[85576]: 2025-10-05 08:30:49.247922484 +0000 UTC m=+0.639582532 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:30:49 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:30:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:30:55 localhost podman[85682]: 2025-10-05 08:30:55.682656865 +0000 UTC m=+0.082111166 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step5, distribution-scope=public, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:30:55 localhost podman[85682]: 2025-10-05 08:30:55.717930971 +0000 UTC m=+0.117385262 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 5 04:30:55 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:31:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:31:07 localhost podman[85708]: 2025-10-05 08:31:07.676723155 +0000 UTC m=+0.082978959 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9, architecture=x86_64, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_id=tripleo_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:31:07 localhost podman[85708]: 2025-10-05 08:31:07.885931702 +0000 UTC m=+0.292187566 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, 
tcib_managed=true, distribution-scope=public, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1) Oct 5 04:31:07 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:31:17 localhost systemd[1]: tmp-crun.f0i173.mount: Deactivated successfully. 
Oct 5 04:31:17 localhost podman[85738]: 2025-10-05 08:31:17.698610378 +0000 UTC m=+0.101367713 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, vcs-type=git, build-date=2025-07-21T15:29:47, tcib_managed=true) Oct 5 04:31:17 localhost podman[85738]: 2025-10-05 08:31:17.729917159 +0000 UTC m=+0.132674494 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:31:17 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:31:17 localhost podman[85737]: 2025-10-05 08:31:17.794604125 +0000 UTC m=+0.198852989 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Oct 5 04:31:17 localhost systemd[1]: tmp-crun.lTsg4s.mount: Deactivated successfully. 
Oct 5 04:31:17 localhost podman[85739]: 2025-10-05 08:31:17.855329916 +0000 UTC m=+0.253374784 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, vcs-type=git) Oct 5 04:31:17 localhost podman[85737]: 2025-10-05 08:31:17.873783421 +0000 UTC m=+0.278032245 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:31:17 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:31:17 localhost podman[85739]: 2025-10-05 08:31:17.894933299 +0000 UTC m=+0.292978127 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, name=rhosp17/openstack-cron) Oct 5 04:31:17 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:31:19 localhost systemd[1]: tmp-crun.6NR5FJ.mount: Deactivated successfully. Oct 5 04:31:19 localhost podman[85823]: 2025-10-05 08:31:19.746080694 +0000 UTC m=+0.126425975 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, container_name=ovn_metadata_agent) Oct 5 04:31:19 localhost systemd[1]: tmp-crun.WQ2zl9.mount: Deactivated successfully. 
Oct 5 04:31:19 localhost podman[85810]: 2025-10-05 08:31:19.767625023 +0000 UTC m=+0.162988377 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:31:19 localhost podman[85817]: 2025-10-05 08:31:19.811741127 +0000 UTC 
m=+0.198135320 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, 
distribution-scope=public, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:31:19 localhost podman[85811]: 2025-10-05 08:31:19.772965916 +0000 UTC m=+0.162947715 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T13:27:15, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:31:19 localhost podman[85809]: 2025-10-05 08:31:19.878318945 +0000 UTC m=+0.277874832 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, release=2, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Oct 5 04:31:19 localhost podman[85809]: 2025-10-05 08:31:19.890835211 +0000 UTC m=+0.290391088 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, container_name=collectd, tcib_managed=true, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:31:19 localhost podman[85810]: 2025-10-05 08:31:19.900042247 +0000 UTC m=+0.295405641 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:31:19 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:31:19 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:31:19 localhost podman[85823]: 2025-10-05 08:31:19.930509766 +0000 UTC m=+0.310855087 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, container_name=ovn_metadata_agent, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12) Oct 5 04:31:19 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:31:19 localhost podman[85811]: 2025-10-05 08:31:19.955907877 +0000 UTC m=+0.345889736 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 5 04:31:19 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:31:20 localhost podman[85817]: 2025-10-05 08:31:20.187866315 +0000 UTC m=+0.574260498 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
release=1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:31:20 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:31:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:31:26 localhost systemd[1]: tmp-crun.uz8vFm.mount: Deactivated successfully. Oct 5 04:31:26 localhost podman[85992]: 2025-10-05 08:31:26.692250816 +0000 UTC m=+0.095294419 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible) Oct 5 04:31:26 localhost podman[85992]: 2025-10-05 08:31:26.727798291 +0000 UTC m=+0.130841894 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, 
name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., container_name=nova_compute, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 5 04:31:26 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:31:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:31:38 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:31:38 localhost recover_tripleo_nova_virtqemud[86025]: 62622 Oct 5 04:31:38 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:31:38 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:31:38 localhost podman[86018]: 2025-10-05 08:31:38.687088256 +0000 UTC m=+0.094207361 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, 
io.openshift.expose-services=, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:31:38 localhost podman[86018]: 2025-10-05 08:31:38.88576603 +0000 UTC m=+0.292885105 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, 
container_name=metrics_qdr, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, distribution-scope=public) Oct 5 04:31:38 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:31:48 localhost podman[86091]: 2025-10-05 08:31:48.69302646 +0000 UTC m=+0.095307639 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:31:48 localhost systemd[1]: tmp-crun.3oO4nd.mount: Deactivated successfully. Oct 5 04:31:48 localhost podman[86091]: 2025-10-05 08:31:48.752462016 +0000 UTC m=+0.154743145 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:31:48 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:31:48 localhost podman[86093]: 2025-10-05 08:31:48.841014353 +0000 UTC m=+0.238772871 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:31:48 localhost podman[86093]: 2025-10-05 08:31:48.853798246 +0000 UTC m=+0.251556744 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-cron-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:31:48 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:31:48 localhost podman[86092]: 2025-10-05 08:31:48.757500141 +0000 UTC m=+0.158780464 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, config_id=tripleo_step4) Oct 5 04:31:48 localhost podman[86092]: 2025-10-05 08:31:48.93850559 +0000 UTC m=+0.339785933 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:31:48 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:31:50 localhost systemd[1]: tmp-crun.YOmi46.mount: Deactivated successfully. Oct 5 04:31:50 localhost systemd[1]: tmp-crun.kMdWli.mount: Deactivated successfully. Oct 5 04:31:50 localhost podman[86172]: 2025-10-05 08:31:50.770002109 +0000 UTC m=+0.157376446 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1) Oct 5 04:31:50 localhost podman[86163]: 2025-10-05 08:31:50.725625537 +0000 UTC m=+0.119942051 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1) Oct 5 04:31:50 localhost podman[86172]: 2025-10-05 08:31:50.805954414 +0000 UTC m=+0.193328821 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, build-date=2025-07-21T16:28:53, release=1, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:31:50 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:31:50 localhost podman[86161]: 2025-10-05 08:31:50.81771361 +0000 UTC m=+0.218906688 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:31:50 localhost podman[86161]: 2025-10-05 08:31:50.825877989 +0000 UTC m=+0.227071097 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 
'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, build-date=2025-07-21T13:04:03, container_name=collectd, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:31:50 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:31:50 localhost podman[86162]: 2025-10-05 08:31:50.875149181 +0000 UTC m=+0.273967355 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, architecture=x86_64, release=1, com.redhat.component=openstack-ovn-controller-container) Oct 5 04:31:50 localhost podman[86164]: 2025-10-05 08:31:50.922455931 +0000 UTC 
m=+0.311319309 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=nova_migration_target, tcib_managed=true, io.openshift.expose-services=) Oct 5 04:31:50 localhost podman[86162]: 2025-10-05 08:31:50.948968804 +0000 UTC m=+0.347786968 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, release=1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 5 04:31:50 localhost podman[86163]: 2025-10-05 08:31:50.9570447 +0000 UTC m=+0.351361224 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) Oct 5 04:31:50 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:31:50 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:31:51 localhost podman[86164]: 2025-10-05 08:31:51.318107844 +0000 UTC m=+0.706971262 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1, io.openshift.expose-services=) Oct 5 04:31:51 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:31:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:31:57 localhost systemd[1]: tmp-crun.UeInfg.mount: Deactivated successfully. 
Oct 5 04:31:57 localhost podman[86266]: 2025-10-05 08:31:57.673355018 +0000 UTC m=+0.084810288 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:31:57 localhost podman[86266]: 2025-10-05 08:31:57.707863095 +0000 UTC m=+0.119318405 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, release=1, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 5 04:31:57 localhost 
systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:32:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:32:09 localhost podman[86291]: 2025-10-05 08:32:09.672839842 +0000 UTC m=+0.079634869 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:07:59, maintainer=OpenStack 
TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, container_name=metrics_qdr) Oct 5 04:32:09 localhost podman[86291]: 2025-10-05 08:32:09.870034426 +0000 UTC m=+0.276829413 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:32:09 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:32:19 localhost systemd[1]: tmp-crun.SSggca.mount: Deactivated successfully. 
Oct 5 04:32:19 localhost podman[86320]: 2025-10-05 08:32:19.720129907 +0000 UTC m=+0.129933309 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, distribution-scope=public, batch=17.1_20250721.1, description=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:32:19 localhost podman[86322]: 2025-10-05 08:32:19.687701927 +0000 UTC m=+0.092445073 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, vendor=Red Hat, Inc.) Oct 5 04:32:19 localhost podman[86322]: 2025-10-05 08:32:19.771823265 +0000 UTC m=+0.176566381 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, container_name=logrotate_crond, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:32:19 localhost podman[86321]: 2025-10-05 08:32:19.781818543 +0000 UTC m=+0.189871749 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:32:19 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:32:19 localhost podman[86320]: 2025-10-05 08:32:19.79510443 +0000 UTC m=+0.204907912 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.expose-services=, 
vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-07-21T14:45:33, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:32:19 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:32:19 localhost podman[86321]: 2025-10-05 08:32:19.843049947 +0000 UTC m=+0.251103163 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:32:19 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:32:21 localhost systemd[1]: tmp-crun.STMMj2.mount: Deactivated successfully. 
Oct 5 04:32:21 localhost podman[86407]: 2025-10-05 08:32:21.716467342 +0000 UTC m=+0.113587321 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, version=17.1.9, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:32:21 localhost systemd[1]: tmp-crun.rozlcr.mount: Deactivated 
successfully. Oct 5 04:32:21 localhost podman[86407]: 2025-10-05 08:32:21.773004159 +0000 UTC m=+0.170124118 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, build-date=2025-07-21T13:28:44, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container) Oct 5 04:32:21 localhost podman[86408]: 2025-10-05 08:32:21.78086283 +0000 UTC 
m=+0.174000882 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack 
Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:32:21 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:32:21 localhost podman[86408]: 2025-10-05 08:32:21.792756179 +0000 UTC m=+0.185894271 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, version=17.1.9) Oct 5 04:32:21 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:32:21 localhost podman[86406]: 2025-10-05 08:32:21.819287822 +0000 UTC m=+0.220067099 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=2, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 04:32:21 localhost podman[86406]: 2025-10-05 08:32:21.830797321 +0000 UTC m=+0.231576628 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, container_name=collectd, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:32:21 localhost systemd[1]: 
0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:32:21 localhost podman[86410]: 2025-10-05 08:32:21.919729168 +0000 UTC m=+0.303921520 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., container_name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:32:21 localhost podman[86416]: 2025-10-05 08:32:21.974305054 +0000 UTC m=+0.359625547 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1) Oct 5 04:32:22 localhost podman[86416]: 2025-10-05 08:32:22.022777725 +0000 UTC m=+0.408098208 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, distribution-scope=public, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:32:22 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:32:22 localhost podman[86410]: 2025-10-05 08:32:22.23976011 +0000 UTC m=+0.623952412 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, 
build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:32:22 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:32:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:32:28 localhost podman[86571]: 2025-10-05 08:32:28.691549599 +0000 UTC m=+0.094885728 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, tcib_managed=true, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:32:28 localhost podman[86571]: 2025-10-05 08:32:28.754800287 +0000 UTC m=+0.158136386 container exec_died 
5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:32:28 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:32:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:32:40 localhost systemd[1]: tmp-crun.2gAMP8.mount: Deactivated successfully. 
Oct 5 04:32:40 localhost podman[86597]: 2025-10-05 08:32:40.693959551 +0000 UTC m=+0.099861362 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, container_name=metrics_qdr, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git) Oct 5 04:32:40 localhost podman[86597]: 2025-10-05 08:32:40.890796585 +0000 UTC m=+0.296698366 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, version=17.1.9, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Oct 5 04:32:40 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:32:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:32:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:32:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:32:50 localhost systemd[1]: tmp-crun.DXflNf.mount: Deactivated successfully. 
Oct 5 04:32:50 localhost podman[86671]: 2025-10-05 08:32:50.690151225 +0000 UTC m=+0.086983786 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47) Oct 5 04:32:50 localhost systemd[1]: tmp-crun.SuNz34.mount: Deactivated successfully. Oct 5 04:32:50 localhost podman[86671]: 2025-10-05 08:32:50.740852607 +0000 UTC m=+0.137685178 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Oct 5 04:32:50 localhost podman[86670]: 2025-10-05 08:32:50.753729562 +0000 UTC m=+0.153850041 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:32:50 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:32:50 localhost podman[86670]: 2025-10-05 08:32:50.780901362 +0000 UTC m=+0.181021851 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9) Oct 5 04:32:50 localhost podman[86672]: 2025-10-05 08:32:50.794563869 +0000 UTC m=+0.186248152 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, release=1, vcs-type=git, config_id=tripleo_step4, container_name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true) Oct 5 04:32:50 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:32:50 localhost podman[86672]: 2025-10-05 08:32:50.801501585 +0000 UTC m=+0.193185918 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=logrotate_crond, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Oct 5 04:32:50 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:32:52 localhost podman[86746]: 2025-10-05 08:32:52.690483107 +0000 UTC m=+0.093728237 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, tcib_managed=true, release=2, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:32:52 localhost podman[86746]: 2025-10-05 08:32:52.697145336 +0000 UTC m=+0.100390466 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, architecture=x86_64, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:32:52 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:32:52 localhost podman[86755]: 2025-10-05 08:32:52.762568812 +0000 UTC m=+0.144193831 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, vcs-type=git, build-date=2025-07-21T16:28:53, version=17.1.9, config_id=tripleo_step4) Oct 5 04:32:52 localhost podman[86754]: 2025-10-05 08:32:52.7118003 +0000 UTC m=+0.097233472 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, container_name=nova_migration_target, distribution-scope=public, name=rhosp17/openstack-nova-compute) Oct 5 04:32:52 localhost podman[86753]: 2025-10-05 08:32:52.745384541 +0000 UTC m=+0.136421314 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc.) 
Oct 5 04:32:52 localhost podman[86747]: 2025-10-05 08:32:52.806384669 +0000 UTC m=+0.196642020 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container) Oct 5 04:32:52 localhost podman[86755]: 2025-10-05 08:32:52.822262596 +0000 UTC 
m=+0.203887615 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:32:52 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:32:52 localhost podman[86747]: 2025-10-05 08:32:52.835243213 +0000 UTC m=+0.225500604 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:32:52 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:32:52 localhost podman[86753]: 2025-10-05 08:32:52.879141042 +0000 UTC m=+0.270177815 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, version=17.1.9, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, vendor=Red Hat, Inc.) Oct 5 04:32:52 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:32:53 localhost podman[86754]: 2025-10-05 08:32:53.090581058 +0000 UTC m=+0.476014220 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:32:53 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:32:53 localhost systemd[1]: tmp-crun.JDuHmS.mount: Deactivated successfully. Oct 5 04:32:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:32:59 localhost podman[86851]: 2025-10-05 08:32:59.679363856 +0000 UTC m=+0.091752064 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 5 04:32:59 localhost podman[86851]: 2025-10-05 08:32:59.734992159 +0000 UTC m=+0.147380377 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_compute, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, 
build-date=2025-07-21T14:48:37, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vendor=Red Hat, Inc., architecture=x86_64) Oct 5 04:32:59 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:33:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:33:11 localhost systemd[1]: tmp-crun.sMxuAI.mount: Deactivated successfully. 
Oct 5 04:33:11 localhost podman[86877]: 2025-10-05 08:33:11.702931987 +0000 UTC m=+0.105108892 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., description=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr) Oct 5 04:33:11 localhost podman[86877]: 2025-10-05 08:33:11.922718058 +0000 UTC m=+0.324894923 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:33:11 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:33:21 localhost podman[86906]: 2025-10-05 08:33:21.695578856 +0000 UTC m=+0.097832417 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:33:21 localhost systemd[1]: tmp-crun.tTmphX.mount: Deactivated successfully. Oct 5 04:33:21 localhost podman[86906]: 2025-10-05 08:33:21.757315254 +0000 UTC m=+0.159568835 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:33:21 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:33:21 localhost podman[86908]: 2025-10-05 08:33:21.760709886 +0000 UTC m=+0.155005883 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat 
OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:33:21 localhost podman[86908]: 2025-10-05 08:33:21.844922946 +0000 UTC m=+0.239218913 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, container_name=logrotate_crond, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 
17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public) Oct 5 04:33:21 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:33:21 localhost podman[86907]: 2025-10-05 08:33:21.90766034 +0000 UTC m=+0.304586148 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git) Oct 5 04:33:21 localhost podman[86907]: 2025-10-05 08:33:21.941308014 +0000 UTC m=+0.338233842 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, 
release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Oct 5 04:33:21 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:33:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:33:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:33:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:33:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:33:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:33:23 localhost systemd[1]: tmp-crun.CRtVei.mount: Deactivated successfully. Oct 5 04:33:23 localhost podman[86993]: 2025-10-05 08:33:23.502587048 +0000 UTC m=+0.115385228 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.33.12, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, build-date=2025-07-21T13:04:03, container_name=collectd) Oct 5 04:33:23 localhost podman[86993]: 2025-10-05 08:33:23.53542767 +0000 UTC m=+0.148225870 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, version=17.1.9, com.redhat.component=openstack-collectd-container, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, config_id=tripleo_step3, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:33:23 localhost podman[86994]: 2025-10-05 08:33:23.554099472 +0000 UTC m=+0.163067789 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, 
distribution-scope=public, tcib_managed=true, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44) Oct 5 04:33:23 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:33:23 localhost podman[87001]: 2025-10-05 08:33:23.600896087 +0000 UTC m=+0.200329808 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container) Oct 5 04:33:23 localhost podman[87007]: 2025-10-05 08:33:23.518365802 +0000 UTC m=+0.111509375 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:33:23 localhost podman[86995]: 2025-10-05 08:33:23.535683737 +0000 UTC m=+0.141034997 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9) Oct 5 04:33:23 localhost podman[87007]: 2025-10-05 08:33:23.65573368 +0000 UTC m=+0.248877253 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, 
batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true) Oct 5 04:33:23 localhost podman[86995]: 2025-10-05 08:33:23.669836879 +0000 UTC m=+0.275188179 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 
17.1 iscsid, release=1, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15) Oct 5 04:33:23 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:33:23 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:33:23 localhost podman[86994]: 2025-10-05 08:33:23.726313765 +0000 UTC m=+0.335282082 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, container_name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:33:23 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:33:24 localhost podman[87001]: 2025-10-05 08:33:24.002994983 +0000 UTC m=+0.602428734 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, container_name=nova_migration_target, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1) Oct 5 04:33:24 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:33:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:33:30 localhost systemd[1]: tmp-crun.fVnKvo.mount: Deactivated successfully. 
Oct 5 04:33:30 localhost podman[87160]: 2025-10-05 08:33:30.700650522 +0000 UTC m=+0.104023504 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git) Oct 5 04:33:30 localhost podman[87160]: 2025-10-05 08:33:30.732068235 +0000 UTC m=+0.135441157 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, config_id=tripleo_step5, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team) Oct 5 04:33:30 localhost systemd[1]: 
5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:33:40 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:33:40 localhost recover_tripleo_nova_virtqemud[87186]: 62622 Oct 5 04:33:40 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:33:40 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:33:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:33:42 localhost podman[87187]: 2025-10-05 08:33:42.701122903 +0000 UTC m=+0.104351303 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20250721.1) Oct 5 04:33:42 localhost podman[87187]: 2025-10-05 08:33:42.897372062 +0000 UTC m=+0.300600482 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, tcib_managed=true, version=17.1.9, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:33:42 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:33:52 localhost podman[87259]: 2025-10-05 08:33:52.684249825 +0000 UTC m=+0.098516516 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:33:52 localhost podman[87259]: 2025-10-05 08:33:52.715926076 +0000 UTC m=+0.130192757 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, version=17.1.9, architecture=x86_64) Oct 5 04:33:52 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:33:52 localhost podman[87260]: 2025-10-05 08:33:52.736014175 +0000 UTC m=+0.142443775 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git) Oct 5 04:33:52 localhost podman[87260]: 2025-10-05 08:33:52.766966656 +0000 UTC m=+0.173396276 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, 
name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 04:33:52 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:33:52 localhost systemd[1]: tmp-crun.8gmyz8.mount: Deactivated successfully. Oct 5 04:33:52 localhost podman[87261]: 2025-10-05 08:33:52.878450829 +0000 UTC m=+0.282161826 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 04:33:52 localhost podman[87261]: 2025-10-05 08:33:52.889765773 +0000 UTC m=+0.293476740 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, container_name=logrotate_crond, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, release=1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step4) Oct 5 04:33:52 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 04:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:33:54 localhost systemd[1]: tmp-crun.cVCfj7.mount: Deactivated successfully. Oct 5 04:33:54 localhost podman[87328]: 2025-10-05 08:33:54.698471519 +0000 UTC m=+0.104770383 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, version=17.1.9, name=rhosp17/openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 5 04:33:54 localhost podman[87331]: 2025-10-05 08:33:54.7093007 +0000 UTC m=+0.101431244 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=) Oct 5 04:33:54 localhost podman[87329]: 2025-10-05 08:33:54.733441968 +0000 UTC m=+0.138123339 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, release=1, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 5 04:33:54 localhost podman[87342]: 2025-10-05 08:33:54.80169023 +0000 UTC m=+0.194074181 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, 
batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible) Oct 5 04:33:54 localhost podman[87329]: 2025-10-05 08:33:54.812328276 +0000 UTC m=+0.217009627 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:33:54 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:33:54 localhost podman[87330]: 2025-10-05 08:33:54.772727103 +0000 UTC m=+0.166810860 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, 
build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:33:54 localhost podman[87328]: 2025-10-05 08:33:54.836620698 +0000 UTC m=+0.242919592 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 5 04:33:54 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:33:54 localhost podman[87330]: 2025-10-05 08:33:54.852271458 +0000 UTC m=+0.246355275 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, 
architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.33.12, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.9, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible) Oct 5 04:33:54 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:33:54 localhost podman[87342]: 2025-10-05 08:33:54.885222683 +0000 UTC m=+0.277606594 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:33:54 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:33:55 localhost podman[87331]: 2025-10-05 08:33:55.066446018 +0000 UTC m=+0.458576582 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible) Oct 5 04:33:55 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:34:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:34:01 localhost systemd[1]: tmp-crun.ypF0X9.mount: Deactivated successfully. Oct 5 04:34:01 localhost podman[87434]: 2025-10-05 08:34:01.67772537 +0000 UTC m=+0.082473136 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-07-21T14:48:37, container_name=nova_compute, release=1, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, version=17.1.9, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:34:01 localhost podman[87434]: 2025-10-05 08:34:01.706255725 +0000 UTC m=+0.111003491 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, release=1, vendor=Red Hat, Inc., config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:34:01 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:34:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:34:13 localhost podman[87460]: 2025-10-05 08:34:13.683559833 +0000 UTC m=+0.090136511 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=metrics_qdr, release=1) Oct 5 04:34:13 localhost podman[87460]: 2025-10-05 08:34:13.904843214 +0000 UTC m=+0.311419882 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step1, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Oct 5 04:34:13 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. 
Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:34:23 localhost systemd[1]: tmp-crun.OmygKZ.mount: Deactivated successfully. Oct 5 04:34:23 localhost podman[87491]: 2025-10-05 08:34:23.692433156 +0000 UTC m=+0.097992112 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:34:23 localhost podman[87492]: 2025-10-05 08:34:23.751344987 +0000 UTC m=+0.152146165 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, container_name=ceilometer_agent_ipmi, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12) Oct 5 04:34:23 localhost podman[87491]: 2025-10-05 08:34:23.769170946 +0000 UTC m=+0.174729912 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, architecture=x86_64, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 04:34:23 localhost podman[87492]: 2025-10-05 08:34:23.77974402 +0000 UTC m=+0.180545208 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 5 04:34:23 localhost podman[87493]: 2025-10-05 
08:34:23.791236068 +0000 UTC m=+0.187090113 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, 
managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:34:23 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:34:23 localhost podman[87493]: 2025-10-05 08:34:23.805128141 +0000 UTC m=+0.200982226 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:34:23 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:34:23 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:34:24 localhost systemd[1]: tmp-crun.zMsQpH.mount: Deactivated successfully. Oct 5 04:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:34:25 localhost systemd[1]: tmp-crun.OTebW2.mount: Deactivated successfully. 
Oct 5 04:34:25 localhost podman[87574]: 2025-10-05 08:34:25.074465258 +0000 UTC m=+0.149641898 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:34:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:34:25 localhost podman[87575]: 2025-10-05 08:34:25.125402856 +0000 UTC m=+0.197250757 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., release=1, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid) Oct 5 04:34:25 localhost podman[87573]: 2025-10-05 08:34:25.04250368 +0000 UTC m=+0.122386286 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, name=rhosp17/openstack-collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, release=2, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, vcs-type=git, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:34:25 localhost podman[87574]: 2025-10-05 08:34:25.154752794 +0000 UTC m=+0.229929394 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, version=17.1.9, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:34:25 localhost podman[87575]: 2025-10-05 08:34:25.164904737 +0000 UTC m=+0.236752658 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:34:25 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:34:25 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:34:25 localhost podman[87573]: 2025-10-05 08:34:25.180947757 +0000 UTC m=+0.260830353 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, distribution-scope=public, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, release=2, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Oct 5 04:34:25 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:34:25 localhost podman[87654]: 2025-10-05 08:34:25.269604087 +0000 UTC m=+0.135150239 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1) Oct 5 04:34:25 localhost podman[87582]: 2025-10-05 08:34:25.2302238 +0000 UTC m=+0.295701429 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, version=17.1.9, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:34:25 localhost podman[87582]: 2025-10-05 08:34:25.313081485 +0000 UTC m=+0.378559114 container exec_died 
cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T16:28:53, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git) Oct 5 04:34:25 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:34:25 localhost podman[87654]: 2025-10-05 08:34:25.644518883 +0000 UTC m=+0.510065055 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team) Oct 5 04:34:25 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:34:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:34:32 localhost podman[87742]: 2025-10-05 08:34:32.684724587 +0000 UTC m=+0.084690815 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 5 04:34:32 localhost podman[87742]: 2025-10-05 08:34:32.718913424 +0000 UTC m=+0.118879642 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64) 
Oct 5 04:34:32 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:34:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:34:44 localhost podman[87766]: 2025-10-05 08:34:44.682479746 +0000 UTC m=+0.091902819 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.expose-services=) Oct 5 04:34:44 localhost podman[87766]: 2025-10-05 08:34:44.904994049 +0000 UTC m=+0.314417082 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:34:44 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:34:54 localhost podman[87839]: 2025-10-05 08:34:54.68535723 +0000 UTC m=+0.095272409 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, 
tcib_managed=true, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:34:54 localhost systemd[1]: tmp-crun.V28c9y.mount: Deactivated successfully. Oct 5 04:34:54 localhost podman[87840]: 2025-10-05 08:34:54.732537355 +0000 UTC m=+0.135762535 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, release=1, batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:34:54 localhost podman[87846]: 2025-10-05 08:34:54.758185225 +0000 UTC m=+0.156060291 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, vcs-type=git, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 04:34:54 localhost podman[87846]: 2025-10-05 08:34:54.765617034 +0000 UTC m=+0.163492040 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true) Oct 5 04:34:54 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:34:54 localhost podman[87840]: 2025-10-05 08:34:54.783515325 +0000 UTC m=+0.186740565 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, managed_by=tripleo_ansible) Oct 5 04:34:54 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:34:54 localhost podman[87839]: 2025-10-05 08:34:54.818566145 +0000 UTC m=+0.228481354 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:34:54 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:34:55 localhost systemd[1]: tmp-crun.eUNuaC.mount: Deactivated successfully. Oct 5 04:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:34:55 localhost podman[87911]: 2025-10-05 08:34:55.716739858 +0000 UTC m=+0.121419972 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:34:55 localhost podman[87911]: 2025-10-05 08:34:55.728677798 +0000 UTC m=+0.133357932 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid) Oct 5 04:34:55 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:34:55 localhost podman[87910]: 2025-10-05 08:34:55.790129078 +0000 UTC m=+0.196959649 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, version=17.1.9, release=1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git) Oct 5 04:34:55 localhost podman[87947]: 2025-10-05 08:34:55.820924994 +0000 UTC m=+0.115920692 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, version=17.1.9, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, 
architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:34:55 localhost podman[87910]: 2025-10-05 08:34:55.833687576 +0000 UTC m=+0.240518137 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, 
vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, release=1, architecture=x86_64, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible) Oct 5 04:34:55 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:34:55 localhost podman[87909]: 2025-10-05 08:34:55.881877241 +0000 UTC m=+0.288654560 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.openshift.expose-services=, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd) Oct 5 04:34:55 localhost podman[87909]: 2025-10-05 08:34:55.892820615 +0000 UTC m=+0.299597964 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, tcib_managed=true, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:34:55 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:34:55 localhost podman[87912]: 2025-10-05 08:34:55.938923632 +0000 UTC m=+0.336605267 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, architecture=x86_64, 
container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=) Oct 5 04:34:55 localhost podman[87912]: 2025-10-05 08:34:55.987561437 +0000 UTC m=+0.385243112 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:34:56 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:34:56 localhost podman[87947]: 2025-10-05 08:34:56.178078362 +0000 UTC m=+0.473074130 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, release=1) Oct 5 04:34:56 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:34:56 localhost systemd[1]: tmp-crun.TJuVpm.mount: Deactivated successfully. Oct 5 04:35:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:35:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5205 writes, 701 syncs, 7.43 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 600 writes, 2346 keys, 600 commit groups, 1.0 writes per commit group, ingest: 2.65 MB, 0.00 MB/s#012Interval WAL: 600 writes, 219 syncs, 2.74 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:35:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:35:03 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:35:03 localhost recover_tripleo_nova_virtqemud[88020]: 62622 Oct 5 04:35:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:35:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:35:03 localhost podman[88017]: 2025-10-05 08:35:03.680850938 +0000 UTC m=+0.090312617 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, release=1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:35:03 localhost podman[88017]: 2025-10-05 08:35:03.70626325 +0000 UTC m=+0.115724949 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12) Oct 5 04:35:03 localhost systemd[1]: 
5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:35:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:35:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5427 writes, 24K keys, 5427 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5427 writes, 711 syncs, 7.63 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 368 writes, 1490 keys, 368 commit groups, 1.0 writes per commit group, ingest: 2.20 MB, 0.00 MB/s#012Interval WAL: 368 writes, 131 syncs, 2.81 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:35:11 localhost sshd[88045]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:35:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:35:15 localhost systemd[84940]: Created slice User Background Tasks Slice. Oct 5 04:35:15 localhost systemd[84940]: Starting Cleanup of User's Temporary Files and Directories... 
Oct 5 04:35:15 localhost podman[88047]: 2025-10-05 08:35:15.718006889 +0000 UTC m=+0.114322288 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, container_name=metrics_qdr, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, config_id=tripleo_step1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, tcib_managed=true) Oct 5 04:35:15 localhost systemd[84940]: Finished Cleanup of User's Temporary Files and Directories. Oct 5 04:35:15 localhost podman[88047]: 2025-10-05 08:35:15.911197435 +0000 UTC m=+0.307512784 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, 
name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 04:35:15 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:35:25 localhost podman[88077]: 2025-10-05 08:35:25.697353575 +0000 UTC m=+0.097652253 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, release=1, tcib_managed=true, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:35:25 localhost podman[88079]: 2025-10-05 08:35:25.747605729 +0000 UTC m=+0.139852111 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc.) Oct 5 04:35:25 localhost podman[88079]: 2025-10-05 08:35:25.755068318 +0000 UTC m=+0.147314690 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, release=1, architecture=x86_64) Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:35:25 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:35:25 localhost podman[88077]: 2025-10-05 08:35:25.805840297 +0000 UTC m=+0.206138975 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, architecture=x86_64, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Oct 5 04:35:25 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:35:25 localhost podman[88078]: 2025-10-05 08:35:25.809109884 +0000 UTC m=+0.205309462 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47) Oct 5 04:35:25 localhost podman[88131]: 2025-10-05 08:35:25.863807766 +0000 UTC m=+0.077127113 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step3, distribution-scope=public) Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 04:35:25 localhost podman[88078]: 2025-10-05 08:35:25.894085376 +0000 UTC m=+0.290284904 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:35:25 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:35:26 localhost podman[88131]: 2025-10-05 08:35:26.000702678 +0000 UTC m=+0.214022005 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:35:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:35:26 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:35:26 localhost podman[88178]: 2025-10-05 08:35:26.029449416 +0000 UTC m=+0.093699156 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=collectd, version=17.1.9, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:35:26 localhost podman[88167]: 2025-10-05 08:35:25.983478807 +0000 UTC m=+0.086733290 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, container_name=ovn_controller, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Oct 5 04:35:26 localhost podman[88178]: 2025-10-05 08:35:26.04115544 +0000 UTC m=+0.105405180 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=collectd, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9) Oct 5 04:35:26 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:35:26 localhost podman[88167]: 2025-10-05 08:35:26.067900565 +0000 UTC m=+0.171155068 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ovn-controller, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:35:26 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:35:26 localhost podman[88207]: 2025-10-05 08:35:26.140275151 +0000 UTC m=+0.100931171 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53) Oct 5 04:35:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:35:26 localhost podman[88207]: 2025-10-05 08:35:26.220868636 +0000 UTC m=+0.181524666 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent) Oct 5 04:35:26 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:35:26 localhost podman[88234]: 2025-10-05 08:35:26.306489765 +0000 UTC m=+0.086351400 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-nova-compute) Oct 5 04:35:26 localhost systemd[1]: tmp-crun.nJaOSp.mount: Deactivated successfully. 
Oct 5 04:35:26 localhost podman[88234]: 2025-10-05 08:35:26.694850822 +0000 UTC m=+0.474712487 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:35:26 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:35:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:35:34 localhost systemd[1]: tmp-crun.jno2h2.mount: Deactivated successfully. Oct 5 04:35:34 localhost podman[88335]: 2025-10-05 08:35:34.699670501 +0000 UTC m=+0.101198938 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, container_name=nova_compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1) Oct 5 04:35:34 localhost podman[88335]: 2025-10-05 08:35:34.734818041 +0000 UTC m=+0.136346468 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 
nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 5 04:35:34 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:35:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:35:46 localhost systemd[1]: tmp-crun.absECc.mount: Deactivated successfully. 
Oct 5 04:35:46 localhost podman[88362]: 2025-10-05 08:35:46.702646036 +0000 UTC m=+0.105717669 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:35:46 localhost podman[88362]: 2025-10-05 08:35:46.935893034 +0000 UTC m=+0.338964707 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:35:46 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:35:56 localhost podman[88451]: 2025-10-05 08:35:56.717560383 +0000 UTC m=+0.087026118 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, container_name=logrotate_crond, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vcs-type=git, release=1, distribution-scope=public, batch=17.1_20250721.1) Oct 5 04:35:56 localhost podman[88444]: 2025-10-05 08:35:56.777599109 +0000 UTC m=+0.160769651 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, build-date=2025-07-21T14:45:33, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 5 04:35:56 localhost podman[88450]: 2025-10-05 08:35:56.823577009 +0000 UTC m=+0.199629550 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:35:56 localhost podman[88450]: 2025-10-05 08:35:56.851177907 +0000 UTC m=+0.227230418 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:35:56 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:35:56 localhost podman[88519]: 2025-10-05 08:35:56.872351163 +0000 UTC m=+0.135062563 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, container_name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, version=17.1.9) Oct 5 04:35:56 localhost podman[88462]: 2025-10-05 08:35:56.74249547 +0000 UTC m=+0.109755986 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.buildah.version=1.33.12) Oct 5 04:35:56 localhost podman[88462]: 2025-10-05 08:35:56.923299575 +0000 UTC m=+0.290560131 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:35:56 localhost systemd[1]: 
cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:35:56 localhost podman[88438]: 2025-10-05 08:35:56.973450366 +0000 UTC m=+0.357937502 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, vcs-type=git, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public) Oct 5 04:35:56 localhost podman[88444]: 2025-10-05 08:35:56.977132906 +0000 UTC m=+0.360303458 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9)
Oct 5 04:35:56 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully.
Oct 5 04:35:57 localhost podman[88436]: 2025-10-05 08:35:57.026242689 +0000 UTC m=+0.421576476 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, release=2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, 
batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, distribution-scope=public) Oct 5 04:35:57 localhost podman[88437]: 2025-10-05 08:35:56.93019749 +0000 UTC m=+0.321057398 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, release=1, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9) Oct 5 04:35:57 localhost podman[88436]: 2025-10-05 08:35:57.034167911 +0000 UTC m=+0.429501738 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, release=2, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64) Oct 5 04:35:57 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:35:57 localhost podman[88451]: 2025-10-05 08:35:57.051248997 +0000 UTC m=+0.420714722 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 04:35:57 localhost podman[88438]: 2025-10-05 08:35:57.057522515 +0000 UTC m=+0.442009711 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:35:57 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:35:57 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:35:57 localhost podman[88437]: 2025-10-05 08:35:57.114699465 +0000 UTC m=+0.505559373 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller) Oct 5 04:35:57 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:35:57 localhost podman[88519]: 2025-10-05 08:35:57.234141398 +0000 UTC m=+0.496852748 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Oct 5 04:35:57 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:36:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:36:05 localhost podman[88614]: 2025-10-05 08:36:05.695207769 +0000 UTC m=+0.097699184 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Oct 5 04:36:05 localhost podman[88614]: 2025-10-05 08:36:05.75023091 +0000 UTC m=+0.152722325 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-nova-compute-container, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute) Oct 5 04:36:05 localhost systemd[1]: 
5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:36:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:36:10 localhost recover_tripleo_nova_virtqemud[88642]: 62622 Oct 5 04:36:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:36:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:36:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:36:17 localhost sshd[88644]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:36:17 localhost podman[88643]: 2025-10-05 08:36:17.687440708 +0000 UTC m=+0.097367115 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:36:17 localhost podman[88643]: 2025-10-05 08:36:17.884241121 +0000 UTC m=+0.294167488 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, release=1) Oct 5 04:36:17 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:36:27 localhost systemd[1]: tmp-crun.VUZUHi.mount: Deactivated successfully. Oct 5 04:36:27 localhost podman[88700]: 2025-10-05 08:36:27.740252827 +0000 UTC m=+0.112134410 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, release=1, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:36:27 localhost podman[88700]: 2025-10-05 08:36:27.776798584 +0000 UTC m=+0.148680177 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, architecture=x86_64, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Oct 5 04:36:27 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:36:27 localhost podman[88674]: 2025-10-05 08:36:27.797247381 +0000 UTC m=+0.190191798 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:36:27 localhost podman[88693]: 2025-10-05 08:36:27.758083244 +0000 UTC 
m=+0.120751971 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:36:27 localhost podman[88674]: 2025-10-05 08:36:27.876107161 +0000 UTC m=+0.269051588 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, 
config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git) Oct 5 04:36:27 localhost podman[88673]: 2025-10-05 08:36:27.884582257 +0000 UTC m=+0.283667228 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, release=2, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 04:36:27 localhost podman[88693]: 2025-10-05 08:36:27.888205754 +0000 UTC m=+0.250874471 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:36:27 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:36:27 localhost podman[88707]: 2025-10-05 08:36:27.886693913 +0000 UTC m=+0.252392941 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, release=1, build-date=2025-07-21T16:28:53, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.9, config_id=tripleo_step4) Oct 5 04:36:27 localhost podman[88673]: 2025-10-05 08:36:27.89963965 +0000 UTC m=+0.298724601 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, name=rhosp17/openstack-collectd, release=2, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-collectd-container, container_name=collectd, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:36:27 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:36:27 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:36:27 localhost podman[88675]: 2025-10-05 08:36:27.71118775 +0000 UTC m=+0.100088998 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:36:27 localhost podman[88685]: 2025-10-05 08:36:27.840628592 +0000 UTC m=+0.222632246 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:36:27 localhost podman[88675]: 2025-10-05 08:36:27.945753272 +0000 UTC m=+0.334654490 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, version=17.1.9, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:36:27 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:36:27 localhost podman[88681]: 2025-10-05 08:36:27.962622744 +0000 UTC m=+0.349299202 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, version=17.1.9) Oct 5 04:36:27 localhost podman[88681]: 2025-10-05 08:36:27.981784536 +0000 UTC m=+0.368461004 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-type=git) Oct 5 04:36:27 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:36:28 localhost podman[88707]: 2025-10-05 08:36:28.023893212 +0000 UTC m=+0.389592290 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.33.12) Oct 5 04:36:28 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:36:28 localhost podman[88685]: 2025-10-05 08:36:28.23578481 +0000 UTC m=+0.617788464 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 5 04:36:28 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:36:28 localhost systemd[1]: tmp-crun.d2x0AJ.mount: Deactivated successfully. Oct 5 04:36:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:36:36 localhost podman[88984]: 2025-10-05 08:36:36.693566614 +0000 UTC m=+0.092946007 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step5, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:36:36 localhost podman[88984]: 2025-10-05 08:36:36.729559577 +0000 UTC m=+0.128938970 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 
nova-compute, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1) Oct 5 04:36:36 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:36:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:36:48 localhost podman[89010]: 2025-10-05 08:36:48.700208019 +0000 UTC m=+0.103951001 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, release=1, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack 
Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:36:48 localhost podman[89010]: 2025-10-05 08:36:48.929014828 +0000 UTC m=+0.332757820 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, batch=17.1_20250721.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc.) Oct 5 04:36:48 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:36:58 localhost systemd[1]: tmp-crun.KIp7sd.mount: Deactivated successfully. Oct 5 04:36:58 localhost podman[89085]: 2025-10-05 08:36:58.717450408 +0000 UTC m=+0.119902408 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, version=17.1.9, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, container_name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:36:58 localhost podman[89116]: 2025-10-05 08:36:58.772773137 +0000 UTC m=+0.146167410 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, container_name=logrotate_crond, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:36:58 localhost podman[89086]: 2025-10-05 08:36:58.75378822 +0000 UTC m=+0.154204836 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, release=1, tcib_managed=true) Oct 5 04:36:58 localhost podman[89087]: 2025-10-05 08:36:58.824427138 +0000 UTC m=+0.220374235 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=iscsid, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:36:58 localhost podman[89085]: 2025-10-05 08:36:58.850584628 +0000 UTC m=+0.253036638 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-type=git, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, container_name=collectd, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack 
osp-17.1, release=2, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:36:58 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:36:58 localhost podman[89099]: 2025-10-05 08:36:58.869043582 +0000 UTC m=+0.254791065 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, vcs-type=git, container_name=nova_migration_target, release=1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible) Oct 5 04:36:58 localhost podman[89116]: 2025-10-05 08:36:58.88468423 +0000 UTC m=+0.258078543 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:36:58 localhost podman[89086]: 2025-10-05 08:36:58.893009513 +0000 UTC m=+0.293426169 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, container_name=ovn_controller, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:36:58 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:36:58 localhost podman[89087]: 2025-10-05 08:36:58.907806359 +0000 UTC m=+0.303753456 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:36:58 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:36:58 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:36:58 localhost podman[89119]: 2025-10-05 08:36:58.734882834 +0000 UTC m=+0.103838478 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, distribution-scope=public) Oct 5 04:36:58 localhost podman[89119]: 2025-10-05 08:36:58.970782012 +0000 UTC m=+0.339737676 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, release=1, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:36:58 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:36:58 localhost podman[89105]: 2025-10-05 08:36:58.982965279 +0000 UTC m=+0.361235992 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, 
config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public) Oct 5 04:36:59 localhost podman[89105]: 2025-10-05 08:36:59.008419859 +0000 UTC m=+0.386690522 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, release=1, tcib_managed=true, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=ceilometer_agent_ipmi) Oct 5 04:36:59 localhost podman[89092]: 2025-10-05 08:36:58.805702937 +0000 UTC m=+0.196376722 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, container_name=ceilometer_agent_compute) Oct 5 04:36:59 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:36:59 localhost podman[89092]: 2025-10-05 08:36:59.09372176 +0000 UTC m=+0.484395545 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, build-date=2025-07-21T14:45:33, release=1, batch=17.1_20250721.1, 
managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:36:59 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:36:59 localhost podman[89099]: 2025-10-05 08:36:59.304867488 +0000 UTC m=+0.690615021 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 5 04:36:59 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:37:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:37:07 localhost systemd[1]: tmp-crun.Hns4ZS.mount: Deactivated successfully. 
Oct 5 04:37:07 localhost podman[89267]: 2025-10-05 08:37:07.677596827 +0000 UTC m=+0.086240366 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-type=git, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, container_name=nova_compute) Oct 5 04:37:07 localhost podman[89267]: 2025-10-05 08:37:07.738904847 +0000 UTC m=+0.147548336 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:48:37) Oct 5 04:37:07 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:37:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:37:19 localhost podman[89295]: 2025-10-05 08:37:19.679048891 +0000 UTC m=+0.089033671 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, tcib_managed=true) Oct 5 04:37:19 localhost podman[89295]: 2025-10-05 08:37:19.908859477 +0000 UTC m=+0.318844197 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step1, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:37:19 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:37:29 localhost systemd[1]: tmp-crun.Du9DfU.mount: Deactivated successfully. Oct 5 04:37:29 localhost podman[89324]: 2025-10-05 08:37:29.713428358 +0000 UTC m=+0.114708459 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Oct 5 04:37:29 localhost podman[89345]: 2025-10-05 08:37:29.72436121 +0000 UTC m=+0.088689402 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, container_name=logrotate_crond, name=rhosp17/openstack-cron, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, release=1, vendor=Red Hat, Inc.) 
Oct 5 04:37:29 localhost podman[89333]: 2025-10-05 08:37:29.837800685 +0000 UTC m=+0.223747655 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-type=git, container_name=nova_migration_target, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container) Oct 5 04:37:29 localhost podman[89324]: 2025-10-05 08:37:29.857025009 +0000 UTC m=+0.258305180 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, release=2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=collectd, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd) Oct 5 04:37:29 localhost podman[89326]: 2025-10-05 08:37:29.758493733 +0000 UTC m=+0.155164750 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, tcib_managed=true, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.9) Oct 5 04:37:29 localhost podman[89327]: 2025-10-05 08:37:29.810471844 +0000 UTC m=+0.201498130 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:37:29 localhost podman[89325]: 2025-10-05 08:37:29.778375466 +0000 UTC m=+0.176863252 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, health_status=healthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, release=1) Oct 5 04:37:29 localhost podman[89326]: 2025-10-05 08:37:29.889661321 +0000 UTC m=+0.286332318 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, vcs-type=git, container_name=iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:37:29 localhost systemd[1]: 
5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:37:29 localhost podman[89325]: 2025-10-05 08:37:29.909866491 +0000 UTC m=+0.308354277 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 
04:37:29 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:37:29 localhost podman[89327]: 2025-10-05 08:37:29.944618331 +0000 UTC m=+0.335644667 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:37:29 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:37:29 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:37:30 localhost podman[89339]: 2025-10-05 08:37:30.051769057 +0000 UTC m=+0.426455686 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.9) Oct 5 04:37:30 localhost podman[89345]: 2025-10-05 08:37:30.076268372 +0000 UTC m=+0.440596564 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52) Oct 5 04:37:30 localhost podman[89351]: 2025-10-05 08:37:30.09078475 +0000 UTC m=+0.464523734 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:37:30 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:37:30 localhost podman[89339]: 2025-10-05 08:37:30.116823226 +0000 UTC m=+0.491509865 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi) Oct 5 04:37:30 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:37:30 localhost podman[89351]: 2025-10-05 08:37:30.172927877 +0000 UTC m=+0.546666851 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:37:30 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:37:30 localhost podman[89333]: 2025-10-05 08:37:30.218902547 +0000 UTC m=+0.604849547 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=nova_migration_target, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:37:30 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:37:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:37:38 localhost systemd[1]: tmp-crun.E4YGmR.mount: Deactivated successfully. Oct 5 04:37:38 localhost podman[89576]: 2025-10-05 08:37:38.701351859 +0000 UTC m=+0.105020090 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, container_name=nova_compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, architecture=x86_64, release=1, build-date=2025-07-21T14:48:37, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) 
Oct 5 04:37:38 localhost podman[89576]: 2025-10-05 08:37:38.730850428 +0000 UTC m=+0.134518659 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, version=17.1.9, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public) Oct 5 04:37:38 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:37:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:37:50 localhost systemd[1]: tmp-crun.sxc0ho.mount: Deactivated successfully. 
Oct 5 04:37:50 localhost podman[89625]: 2025-10-05 08:37:50.689821189 +0000 UTC m=+0.100607892 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=metrics_qdr, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:37:50 localhost podman[89625]: 2025-10-05 08:37:50.927025073 +0000 UTC m=+0.337811796 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Oct 5 04:37:50 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:38:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:38:00 localhost recover_tripleo_nova_virtqemud[89706]: 62622 Oct 5 04:38:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:38:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:38:00 localhost systemd[1]: tmp-crun.Lkli4b.mount: Deactivated successfully. Oct 5 04:38:00 localhost podman[89679]: 2025-10-05 08:38:00.744040365 +0000 UTC m=+0.115297874 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:38:00 localhost podman[89679]: 2025-10-05 08:38:00.776831242 +0000 UTC m=+0.148088781 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, release=1) Oct 5 04:38:00 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:38:00 localhost podman[89674]: 2025-10-05 08:38:00.792672746 +0000 UTC m=+0.166127154 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:38:00 localhost podman[89654]: 2025-10-05 08:38:00.778304041 +0000 UTC m=+0.174816736 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, vcs-type=git, config_id=tripleo_step3, container_name=collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:38:00 localhost podman[89663]: 2025-10-05 08:38:00.85002766 +0000 UTC m=+0.232117189 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, container_name=nova_migration_target, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1) Oct 5 04:38:00 localhost podman[89654]: 2025-10-05 08:38:00.862759141 +0000 UTC m=+0.259271906 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, io.openshift.expose-services=) Oct 5 04:38:00 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:38:00 localhost podman[89657]: 2025-10-05 08:38:00.916034395 +0000 UTC m=+0.299518651 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, container_name=ceilometer_agent_compute, architecture=x86_64, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:38:00 localhost podman[89657]: 2025-10-05 08:38:00.949748927 +0000 UTC m=+0.333233213 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible) Oct 5 04:38:00 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:38:00 localhost podman[89674]: 2025-10-05 08:38:00.969596078 +0000 UTC m=+0.343050496 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, tcib_managed=true, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 5 04:38:00 localhost podman[89655]: 2025-10-05 08:38:00.717989779 +0000 UTC m=+0.111196335 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller) Oct 5 04:38:00 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:38:01 localhost podman[89655]: 2025-10-05 08:38:01.008913459 +0000 UTC m=+0.402120045 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:38:01 localhost podman[89686]: 2025-10-05 08:38:00.956309752 +0000 UTC m=+0.315278183 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:38:01 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:38:01 localhost podman[89686]: 2025-10-05 08:38:01.089775122 +0000 UTC m=+0.448743483 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, 
Inc., batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:38:01 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:38:01 localhost podman[89656]: 2025-10-05 08:38:01.060694104 +0000 UTC m=+0.451109876 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:27:15, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid) Oct 5 04:38:01 localhost podman[89656]: 2025-10-05 08:38:01.141985908 +0000 UTC m=+0.532401660 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, release=1, vcs-type=git, container_name=iscsid, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.9, distribution-scope=public) Oct 5 04:38:01 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:38:01 localhost podman[89663]: 2025-10-05 08:38:01.193718912 +0000 UTC m=+0.575808411 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp 
openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.expose-services=) Oct 5 04:38:01 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:38:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:38:09 localhost podman[89833]: 2025-10-05 08:38:09.67261275 +0000 UTC m=+0.084148822 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=nova_compute, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9) Oct 5 04:38:09 localhost podman[89833]: 2025-10-05 08:38:09.705769377 +0000 UTC m=+0.117305469 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, 
name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:38:09 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:38:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:38:21 localhost podman[89859]: 2025-10-05 08:38:21.681669149 +0000 UTC m=+0.092485715 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, vcs-type=git, container_name=metrics_qdr, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, release=1, version=17.1.9, io.buildah.version=1.33.12) Oct 5 04:38:21 localhost podman[89859]: 2025-10-05 08:38:21.903999395 +0000 UTC m=+0.314815911 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack 
Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, release=1, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:38:21 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:38:31 localhost systemd[1]: tmp-crun.V240q3.mount: Deactivated successfully. 
Oct 5 04:38:31 localhost podman[89890]: 2025-10-05 08:38:31.706546984 +0000 UTC m=+0.105060441 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, release=1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:38:31 localhost systemd[1]: tmp-crun.uSBnPv.mount: Deactivated successfully. Oct 5 04:38:31 localhost podman[89891]: 2025-10-05 08:38:31.72844554 +0000 UTC m=+0.112478640 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:38:31 localhost podman[89889]: 2025-10-05 08:38:31.748812204 +0000 UTC m=+0.149872169 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, container_name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, release=1) Oct 5 04:38:31 localhost podman[89889]: 2025-10-05 08:38:31.824408336 +0000 UTC m=+0.225468291 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, container_name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:38:31 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:38:31 localhost podman[89890]: 2025-10-05 08:38:31.848109429 +0000 UTC m=+0.246622936 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15) Oct 5 04:38:31 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:38:31 localhost podman[89897]: 2025-10-05 08:38:31.80957881 +0000 UTC m=+0.196658981 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 5 04:38:31 localhost podman[89891]: 2025-10-05 08:38:31.868825294 +0000 UTC m=+0.252858394 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack 
osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12) Oct 5 04:38:31 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:38:31 localhost podman[89910]: 2025-10-05 08:38:31.917318001 +0000 UTC m=+0.299608284 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-type=git, container_name=logrotate_crond, name=rhosp17/openstack-cron, architecture=x86_64, 
io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:38:31 localhost podman[89911]: 2025-10-05 08:38:31.92511988 +0000 UTC m=+0.295690920 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, 
tcib_managed=true, config_id=tripleo_step4) Oct 5 04:38:31 localhost podman[89910]: 2025-10-05 08:38:31.928012747 +0000 UTC m=+0.310303040 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, container_name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, version=17.1.9, maintainer=OpenStack TripleO 
Team, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container) Oct 5 04:38:31 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:38:31 localhost podman[89904]: 2025-10-05 08:38:31.967523873 +0000 UTC m=+0.344925635 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:38:31 localhost podman[89911]: 2025-10-05 08:38:31.97077156 +0000 UTC m=+0.341342640 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:38:31 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:38:31 localhost podman[89888]: 2025-10-05 08:38:31.831550366 +0000 UTC m=+0.235499259 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, release=2, container_name=collectd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=) Oct 5 04:38:32 localhost podman[89904]: 2025-10-05 08:38:32.002884609 +0000 UTC m=+0.380286381 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi) Oct 5 04:38:32 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:38:32 localhost podman[89888]: 2025-10-05 08:38:32.016424411 +0000 UTC m=+0.420373294 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 
collectd, release=2, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., container_name=collectd) Oct 5 04:38:32 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:38:32 localhost podman[89897]: 2025-10-05 08:38:32.193842896 +0000 UTC m=+0.580923027 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:38:32 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:38:35 localhost podman[90162]: 2025-10-05 08:38:35.266010438 +0000 UTC m=+0.104181687 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, version=7, release=553, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7) Oct 5 04:38:35 localhost podman[90162]: 2025-10-05 08:38:35.364266267 +0000 UTC m=+0.202437496 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, ceph=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , version=7, GIT_CLEAN=True, name=rhceph, vcs-type=git, build-date=2025-09-24T08:57:55, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7) Oct 5 04:38:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:38:40 localhost podman[90304]: 2025-10-05 08:38:40.680263966 +0000 UTC m=+0.086958986 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_id=tripleo_step5) Oct 5 04:38:40 localhost podman[90304]: 2025-10-05 08:38:40.713055643 +0000 UTC m=+0.119750693 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 
04:38:40 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:38:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:38:52 localhost podman[90352]: 2025-10-05 08:38:52.712990048 +0000 UTC m=+0.118408607 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:07:59, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 5 04:38:52 localhost podman[90352]: 2025-10-05 08:38:52.901707366 +0000 UTC m=+0.307125935 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, 
version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:38:52 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:39:02 localhost systemd[1]: tmp-crun.vN9cI0.mount: Deactivated successfully. Oct 5 04:39:02 localhost systemd[1]: tmp-crun.gmAwvG.mount: Deactivated successfully. Oct 5 04:39:02 localhost podman[90383]: 2025-10-05 08:39:02.768209476 +0000 UTC m=+0.141421183 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9) Oct 5 04:39:02 localhost podman[90381]: 2025-10-05 08:39:02.771485034 +0000 UTC m=+0.163857824 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc.) 
Oct 5 04:39:02 localhost podman[90383]: 2025-10-05 08:39:02.778782549 +0000 UTC m=+0.151994236 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, container_name=iscsid, release=1, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Oct 5 04:39:02 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:39:02 localhost podman[90382]: 2025-10-05 08:39:02.825356324 +0000 UTC m=+0.213782798 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, architecture=x86_64, managed_by=tripleo_ansible) Oct 5 04:39:02 localhost podman[90413]: 2025-10-05 08:39:02.779604731 +0000 UTC m=+0.139482182 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:39:02 localhost podman[90382]: 2025-10-05 08:39:02.846666884 +0000 UTC m=+0.235093338 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:39:02 localhost podman[90413]: 2025-10-05 08:39:02.862710024 +0000 UTC m=+0.222587485 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, release=1, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53) Oct 5 04:39:02 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:39:02 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:39:02 localhost podman[90396]: 2025-10-05 08:39:02.932287544 +0000 UTC m=+0.301694459 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9) Oct 5 04:39:02 localhost podman[90395]: 2025-10-05 08:39:02.979602179 +0000 UTC m=+0.342050639 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, distribution-scope=public) Oct 5 04:39:02 localhost podman[90396]: 2025-10-05 08:39:02.987802129 +0000 UTC m=+0.357209104 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20250721.1) Oct 5 04:39:02 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:39:03 localhost podman[90389]: 2025-10-05 08:39:03.034457097 +0000 UTC m=+0.414074086 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team) Oct 5 04:39:03 localhost podman[90381]: 2025-10-05 08:39:03.060999956 +0000 UTC m=+0.453372726 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, architecture=x86_64, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true) Oct 5 04:39:03 localhost podman[90389]: 2025-10-05 08:39:03.07007924 +0000 UTC m=+0.449696169 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 5 04:39:03 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:39:03 localhost podman[90402]: 2025-10-05 08:39:03.035897385 +0000 UTC m=+0.397656356 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:39:03 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:39:03 localhost podman[90402]: 2025-10-05 08:39:03.123780636 +0000 UTC m=+0.485539637 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4) Oct 5 04:39:03 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:39:03 localhost podman[90395]: 2025-10-05 08:39:03.361860643 +0000 UTC m=+0.724309103 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:39:03 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:39:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:39:11 localhost systemd[1]: tmp-crun.sHuj92.mount: Deactivated successfully. 
Oct 5 04:39:11 localhost podman[90552]: 2025-10-05 08:39:11.699771681 +0000 UTC m=+0.102866102 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, version=17.1.9, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Oct 5 04:39:11 localhost podman[90552]: 2025-10-05 08:39:11.735882127 +0000 UTC m=+0.138976578 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, tcib_managed=true, config_id=tripleo_step5, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:39:11 
localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:39:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:39:23 localhost systemd[1]: tmp-crun.87iwur.mount: Deactivated successfully. Oct 5 04:39:23 localhost podman[90579]: 2025-10-05 08:39:23.689361699 +0000 UTC m=+0.095719921 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, release=1, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc.) Oct 5 04:39:23 localhost podman[90579]: 2025-10-05 08:39:23.90103848 +0000 UTC m=+0.307396692 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9) Oct 5 04:39:23 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:39:33 localhost systemd[1]: tmp-crun.zAoOSi.mount: Deactivated successfully. Oct 5 04:39:33 localhost systemd[1]: tmp-crun.qyY0Lh.mount: Deactivated successfully. Oct 5 04:39:33 localhost podman[90630]: 2025-10-05 08:39:33.786968286 +0000 UTC m=+0.154311187 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, release=1, tcib_managed=true, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9) Oct 5 04:39:33 localhost podman[90630]: 2025-10-05 08:39:33.795800643 +0000 UTC m=+0.163143564 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:07:52, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=logrotate_crond) Oct 5 04:39:33 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:39:33 localhost podman[90622]: 2025-10-05 08:39:33.716548733 +0000 UTC m=+0.091267341 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, batch=17.1_20250721.1) Oct 5 04:39:33 localhost podman[90617]: 2025-10-05 08:39:33.772431238 +0000 UTC m=+0.151068461 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container) Oct 5 04:39:33 localhost podman[90611]: 2025-10-05 08:39:33.839155872 +0000 UTC m=+0.215302199 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, distribution-scope=public, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) 
Oct 5 04:39:33 localhost podman[90635]: 2025-10-05 08:39:33.747798229 +0000 UTC m=+0.114717148 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Oct 5 04:39:33 localhost podman[90609]: 2025-10-05 08:39:33.752560216 +0000 UTC m=+0.144012662 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, release=1) Oct 5 04:39:33 localhost podman[90609]: 2025-10-05 08:39:33.885825071 +0000 UTC m=+0.277277537 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:39:33 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:39:33 localhost podman[90622]: 2025-10-05 08:39:33.899954039 +0000 UTC m=+0.274672727 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 5 04:39:33 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:39:33 localhost podman[90635]: 2025-10-05 08:39:33.927634609 +0000 UTC m=+0.294553528 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:39:33 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:39:33 localhost podman[90608]: 2025-10-05 08:39:33.943678847 +0000 UTC m=+0.340107416 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:39:33 localhost podman[90610]: 2025-10-05 08:39:33.900650217 +0000 UTC m=+0.290786628 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Oct 5 04:39:33 localhost podman[90608]: 2025-10-05 08:39:33.976989318 +0000 UTC m=+0.373417867 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-collectd, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, 
com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, vcs-type=git, container_name=collectd, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:39:33 localhost podman[90610]: 2025-10-05 08:39:33.986952025 +0000 UTC m=+0.377088406 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=iscsid, config_id=tripleo_step3, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team) Oct 5 04:39:33 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:39:34 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:39:34 localhost podman[90611]: 2025-10-05 08:39:34.029187435 +0000 UTC m=+0.405333752 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:39:34 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:39:34 localhost podman[90617]: 2025-10-05 08:39:34.106266576 +0000 UTC m=+0.484903779 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:39:34 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:39:38 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:39:38 localhost recover_tripleo_nova_virtqemud[90859]: 62622 Oct 5 04:39:38 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:39:38 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:39:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:39:42 localhost systemd[1]: tmp-crun.D3Wq7B.mount: Deactivated successfully. 
Oct 5 04:39:42 localhost podman[90860]: 2025-10-05 08:39:42.701347122 +0000 UTC m=+0.101637060 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1, version=17.1.9, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:39:42 localhost podman[90860]: 2025-10-05 08:39:42.739871572 +0000 UTC m=+0.140161550 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, release=1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:39:42 localhost 
systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:39:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:39:54 localhost systemd[1]: tmp-crun.H94zBr.mount: Deactivated successfully. Oct 5 04:39:54 localhost podman[90909]: 2025-10-05 08:39:54.677684284 +0000 UTC m=+0.085929289 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:39:54 localhost podman[90909]: 2025-10-05 08:39:54.872232377 +0000 UTC m=+0.280477312 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, config_id=tripleo_step1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:39:54 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:40:04 localhost systemd[1]: tmp-crun.BpQURP.mount: Deactivated successfully. Oct 5 04:40:04 localhost podman[90939]: 2025-10-05 08:40:04.708622961 +0000 UTC m=+0.102836681 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., container_name=ovn_controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:40:04 localhost podman[90956]: 2025-10-05 08:40:04.766101588 +0000 UTC m=+0.144366982 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:40:04 localhost podman[90940]: 2025-10-05 08:40:04.810097055 +0000 UTC m=+0.201144520 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:40:04 localhost podman[90956]: 2025-10-05 08:40:04.813723762 +0000 UTC m=+0.191989166 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, release=1, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:40:04 localhost podman[90940]: 2025-10-05 08:40:04.821616803 +0000 UTC m=+0.212664268 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, distribution-scope=public, 
description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:40:04 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:40:04 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:40:04 localhost podman[90964]: 2025-10-05 08:40:04.736879977 +0000 UTC m=+0.108352269 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:40:04 localhost podman[90939]: 2025-10-05 08:40:04.84208886 +0000 UTC m=+0.236302610 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:40:04 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:40:04 localhost podman[90938]: 2025-10-05 08:40:04.760877278 +0000 UTC m=+0.140410836 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2) Oct 5 04:40:04 localhost podman[90964]: 2025-10-05 08:40:04.917291992 +0000 UTC m=+0.288764344 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:40:04 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:40:04 localhost podman[90959]: 2025-10-05 08:40:04.931725178 +0000 UTC m=+0.305209604 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 
cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:40:04 localhost podman[90959]: 2025-10-05 08:40:04.970880535 +0000 UTC m=+0.344364971 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-cron-container, architecture=x86_64, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:40:04 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:40:04 localhost podman[90942]: 2025-10-05 08:40:04.992588666 +0000 UTC m=+0.367859480 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:40:05 localhost podman[90950]: 2025-10-05 08:40:05.043254901 +0000 UTC m=+0.424716050 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:40:05 localhost podman[90938]: 2025-10-05 08:40:05.052125808 +0000 UTC m=+0.431659426 container exec_died 
0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat 
OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd) Oct 5 04:40:05 localhost podman[90942]: 2025-10-05 08:40:05.054894382 +0000 UTC m=+0.430165166 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=) Oct 5 04:40:05 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:40:05 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:40:05 localhost podman[90950]: 2025-10-05 08:40:05.420976222 +0000 UTC m=+0.802437411 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible) Oct 5 04:40:05 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:40:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:40:13 localhost systemd[1]: tmp-crun.PsPAwD.mount: Deactivated successfully. Oct 5 04:40:13 localhost podman[91125]: 2025-10-05 08:40:13.683144213 +0000 UTC m=+0.094216511 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, version=17.1.9, batch=17.1_20250721.1, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1) Oct 5 04:40:13 localhost podman[91125]: 2025-10-05 08:40:13.710167196 +0000 UTC m=+0.121239514 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git) Oct 5 04:40:13 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:40:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:40:25 localhost systemd[1]: tmp-crun.QnyK83.mount: Deactivated successfully. 
Oct 5 04:40:25 localhost podman[91152]: 2025-10-05 08:40:25.67859106 +0000 UTC m=+0.090718576 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible) Oct 5 04:40:25 localhost podman[91152]: 2025-10-05 08:40:25.911945821 +0000 UTC m=+0.324073277 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, 
io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Oct 5 04:40:25 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:40:35 localhost systemd[1]: tmp-crun.L8Rlfw.mount: Deactivated successfully. Oct 5 04:40:35 localhost podman[91191]: 2025-10-05 08:40:35.738332068 +0000 UTC m=+0.124733247 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, release=1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Oct 5 04:40:35 localhost podman[91211]: 2025-10-05 08:40:35.748044457 +0000 UTC m=+0.117295297 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-07-21T16:28:53, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Oct 5 04:40:35 localhost podman[91192]: 2025-10-05 08:40:35.797515531 +0000 UTC m=+0.170549753 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 nova-compute, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:40:35 localhost podman[91184]: 2025-10-05 08:40:35.777146856 +0000 UTC m=+0.168267501 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.9, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, release=1, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:40:35 localhost podman[91198]: 2025-10-05 08:40:35.833617466 +0000 UTC m=+0.211286261 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, distribution-scope=public, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Oct 5 04:40:35 localhost 
podman[91211]: 2025-10-05 08:40:35.851409552 +0000 UTC m=+0.220660392 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=) Oct 5 04:40:35 localhost podman[91191]: 2025-10-05 08:40:35.859750315 +0000 UTC m=+0.246151534 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) Oct 5 04:40:35 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:40:35 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:40:35 localhost podman[91198]: 2025-10-05 08:40:35.88163124 +0000 UTC m=+0.259300055 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:40:35 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:40:35 localhost podman[91210]: 2025-10-05 08:40:35.907937814 +0000 UTC m=+0.279801024 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:40:35 localhost podman[91184]: 2025-10-05 08:40:35.912055454 +0000 UTC m=+0.303176109 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, config_id=tripleo_step4) Oct 5 04:40:35 localhost podman[91185]: 2025-10-05 08:40:35.941366527 +0000 UTC m=+0.328182307 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, version=17.1.9, container_name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, tcib_managed=true, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:40:35 localhost podman[91185]: 2025-10-05 08:40:35.953987615 +0000 UTC m=+0.340803455 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:40:35 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:40:35 localhost podman[91210]: 2025-10-05 08:40:35.97398412 +0000 UTC m=+0.345847380 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=logrotate_crond, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:40:35 localhost podman[91183]: 2025-10-05 08:40:35.984629545 +0000 UTC m=+0.382244364 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd) Oct 5 04:40:35 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:40:35 localhost podman[91183]: 2025-10-05 08:40:35.99680215 +0000 UTC m=+0.394416979 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, tcib_managed=true, distribution-scope=public, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:40:36 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:40:36 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:40:36 localhost podman[91192]: 2025-10-05 08:40:36.180887924 +0000 UTC m=+0.553922156 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64) Oct 5 04:40:36 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:40:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:40:44 localhost systemd[1]: tmp-crun.1RNVUI.mount: Deactivated successfully. Oct 5 04:40:44 localhost podman[91442]: 2025-10-05 08:40:44.685661434 +0000 UTC m=+0.083661029 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, build-date=2025-07-21T14:48:37, container_name=nova_compute, version=17.1.9) Oct 5 04:40:44 localhost podman[91442]: 2025-10-05 08:40:44.722012666 +0000 UTC m=+0.120012311 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:40:44 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:40:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:40:56 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:40:56 localhost recover_tripleo_nova_virtqemud[91475]: 62622 Oct 5 04:40:56 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:40:56 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:40:56 localhost podman[91468]: 2025-10-05 08:40:56.681436717 +0000 UTC m=+0.086731810 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, release=1, config_id=tripleo_step1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 5 04:40:56 localhost podman[91468]: 2025-10-05 08:40:56.846784129 +0000 UTC m=+0.252079202 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, release=1, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:40:56 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:41:06 localhost systemd[1]: tmp-crun.0JwLye.mount: Deactivated successfully. Oct 5 04:41:06 localhost podman[91501]: 2025-10-05 08:41:06.705772875 +0000 UTC m=+0.096195703 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step3) Oct 5 04:41:06 localhost systemd[1]: tmp-crun.Jbd8hW.mount: Deactivated successfully. Oct 5 04:41:06 localhost podman[91528]: 2025-10-05 08:41:06.717628363 +0000 UTC m=+0.091449347 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:41:06 localhost podman[91528]: 2025-10-05 08:41:06.797444067 +0000 UTC m=+0.171265021 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.expose-services=) Oct 5 04:41:06 localhost podman[91502]: 2025-10-05 08:41:06.754644403 +0000 UTC m=+0.143941791 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, vcs-type=git, build-date=2025-07-21T14:45:33) Oct 5 04:41:06 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:41:06 localhost podman[91502]: 2025-10-05 08:41:06.83981424 +0000 UTC m=+0.229111588 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git) Oct 5 04:41:06 localhost podman[91515]: 2025-10-05 08:41:06.846631343 +0000 UTC m=+0.228968395 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, 
managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, release=1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, version=17.1.9) Oct 5 04:41:06 localhost podman[91501]: 2025-10-05 08:41:06.848667017 +0000 UTC m=+0.239089845 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, build-date=2025-07-21T13:27:15, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, container_name=iscsid) Oct 5 04:41:06 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:41:06 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:41:06 localhost podman[91515]: 2025-10-05 08:41:06.874526049 +0000 UTC m=+0.256863091 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:41:06 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:41:06 localhost podman[91500]: 2025-10-05 08:41:06.802515283 +0000 UTC m=+0.197748060 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller) Oct 5 04:41:06 localhost podman[91499]: 2025-10-05 08:41:06.914544139 +0000 UTC m=+0.313413443 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=2, distribution-scope=public) Oct 5 04:41:06 localhost podman[91500]: 2025-10-05 08:41:06.93477001 +0000 UTC m=+0.330002787 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=ovn_controller, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:41:06 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:41:06 localhost podman[91499]: 2025-10-05 08:41:06.961877045 +0000 UTC m=+0.360746389 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.buildah.version=1.33.12, release=2, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:41:06 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:41:06 localhost podman[91509]: 2025-10-05 08:41:06.778553532 +0000 UTC m=+0.162651821 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.openshift.expose-services=, release=1) Oct 5 04:41:07 localhost podman[91526]: 2025-10-05 08:41:07.013350071 +0000 UTC m=+0.389579439 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, tcib_managed=true, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat 
OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:41:07 localhost podman[91526]: 2025-10-05 08:41:07.050826924 +0000 UTC m=+0.427056242 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, vcs-type=git) Oct 5 04:41:07 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:41:07 localhost podman[91509]: 2025-10-05 08:41:07.171886612 +0000 UTC m=+0.555984931 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, architecture=x86_64, container_name=nova_migration_target, distribution-scope=public, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:41:07 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:41:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:41:15 localhost podman[91695]: 2025-10-05 08:41:15.681999415 +0000 UTC m=+0.085438796 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64) Oct 5 04:41:15 localhost podman[91695]: 2025-10-05 08:41:15.710644801 +0000 UTC m=+0.114084202 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Oct 5 04:41:15 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:41:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:41:27 localhost podman[91721]: 2025-10-05 08:41:27.660300991 +0000 UTC m=+0.070617220 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:41:27 localhost podman[91721]: 2025-10-05 08:41:27.853302523 +0000 UTC m=+0.263618762 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, release=1, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12) Oct 5 04:41:27 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:41:31 localhost sshd[91751]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:41:31 localhost sshd[91752]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:41:37 localhost systemd[1]: tmp-crun.OQUCRq.mount: Deactivated successfully. 
Oct 5 04:41:37 localhost podman[91779]: 2025-10-05 08:41:37.715533407 +0000 UTC m=+0.096647446 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, release=1, 
vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, name=rhosp17/openstack-cron) Oct 5 04:41:37 localhost podman[91756]: 2025-10-05 08:41:37.704644416 +0000 UTC m=+0.101863505 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid) Oct 5 04:41:37 localhost podman[91768]: 2025-10-05 08:41:37.746752852 +0000 UTC m=+0.128284642 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:41:37 localhost podman[91779]: 2025-10-05 08:41:37.801014873 +0000 UTC m=+0.182128922 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) Oct 5 04:41:37 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:41:37 localhost podman[91757]: 2025-10-05 08:41:37.820135724 +0000 UTC m=+0.204199942 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:41:37 localhost podman[91774]: 2025-10-05 08:41:37.777812743 +0000 UTC m=+0.159297982 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:41:37 localhost podman[91756]: 2025-10-05 08:41:37.840133199 +0000 UTC m=+0.237352298 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, container_name=iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, release=1, config_id=tripleo_step3) Oct 5 04:41:37 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:41:37 localhost podman[91774]: 2025-10-05 08:41:37.86182737 +0000 UTC m=+0.243312629 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:41:37 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:41:37 localhost podman[91755]: 2025-10-05 08:41:37.918643179 +0000 UTC m=+0.315918399 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, version=17.1.9, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:41:37 localhost podman[91757]: 2025-10-05 08:41:37.930846376 +0000 UTC m=+0.314910674 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, 
io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, build-date=2025-07-21T14:45:33, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:41:37 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:41:37 localhost podman[91755]: 2025-10-05 08:41:37.9538401 +0000 UTC m=+0.351115330 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller) Oct 5 04:41:37 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:41:37 localhost podman[91754]: 2025-10-05 08:41:37.970845275 +0000 UTC m=+0.370541730 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, io.openshift.expose-services=, release=2, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.9, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, batch=17.1_20250721.1) Oct 5 04:41:38 localhost podman[91780]: 2025-10-05 08:41:38.034634021 +0000 UTC m=+0.408844975 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git) Oct 5 04:41:38 localhost podman[91754]: 2025-10-05 08:41:38.057651997 +0000 UTC m=+0.457348452 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, version=17.1.9, maintainer=OpenStack TripleO Team, release=2, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:41:38 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:41:38 localhost podman[91780]: 2025-10-05 08:41:38.083867038 +0000 UTC m=+0.458077982 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:41:38 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:41:38 localhost podman[91768]: 2025-10-05 08:41:38.098141179 +0000 UTC m=+0.479673039 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, release=1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:41:38 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:41:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:41:46 localhost podman[92002]: 2025-10-05 08:41:46.667269981 +0000 UTC m=+0.068328708 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, architecture=x86_64, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:41:46 localhost podman[92002]: 2025-10-05 08:41:46.698787874 +0000 UTC m=+0.099846631 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
io.buildah.version=1.33.12) Oct 5 04:41:46 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:41:58 localhost podman[92028]: 2025-10-05 08:41:58.67719471 +0000 UTC m=+0.088625490 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.buildah.version=1.33.12, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git) Oct 5 04:41:58 localhost podman[92028]: 2025-10-05 08:41:58.859933908 +0000 UTC m=+0.271364658 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:41:58 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:42:08 localhost systemd[1]: tmp-crun.CYnsQ8.mount: Deactivated successfully. Oct 5 04:42:08 localhost podman[92087]: 2025-10-05 08:42:08.711638903 +0000 UTC m=+0.085961019 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, 
batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=logrotate_crond, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Oct 5 04:42:08 localhost podman[92061]: 2025-10-05 08:42:08.725999997 +0000 UTC m=+0.116220029 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, io.openshift.expose-services=) Oct 5 04:42:08 localhost podman[92061]: 2025-10-05 08:42:08.745778487 +0000 UTC m=+0.135998519 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, version=17.1.9, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 5 04:42:08 localhost podman[92087]: 2025-10-05 08:42:08.746155297 +0000 UTC m=+0.120477443 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:42:08 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:42:08 localhost podman[92062]: 2025-10-05 08:42:08.765672749 +0000 UTC m=+0.158397968 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, release=1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:42:08 localhost podman[92062]: 2025-10-05 08:42:08.805657778 +0000 UTC m=+0.198382997 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20250721.1, distribution-scope=public, release=1, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git) Oct 5 04:42:08 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:42:08 localhost podman[92063]: 2025-10-05 08:42:08.817619557 +0000 UTC m=+0.206331618 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.9, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:42:08 localhost podman[92093]: 2025-10-05 08:42:08.875187357 +0000 UTC m=+0.243907834 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:42:08 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. 
Oct 5 04:42:08 localhost podman[92093]: 2025-10-05 08:42:08.925705428 +0000 UTC m=+0.294425935 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.9, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc.) Oct 5 04:42:08 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:42:08 localhost podman[92071]: 2025-10-05 08:42:08.982189469 +0000 UTC m=+0.368667871 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, version=17.1.9, build-date=2025-07-21T14:48:37, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1) Oct 5 04:42:09 localhost podman[92079]: 2025-10-05 08:42:09.031711664 +0000 UTC m=+0.413087949 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 5 04:42:09 localhost podman[92060]: 2025-10-05 08:42:09.083965711 +0000 UTC m=+0.482991688 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, 
name=rhosp17/openstack-collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, container_name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, maintainer=OpenStack TripleO Team, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_id=tripleo_step3) Oct 5 04:42:09 localhost podman[92060]: 2025-10-05 08:42:09.092075298 +0000 UTC m=+0.491101335 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, 
com.redhat.component=openstack-collectd-container, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, container_name=collectd, io.openshift.expose-services=, architecture=x86_64) Oct 5 04:42:09 localhost podman[92063]: 2025-10-05 08:42:09.100998196 +0000 UTC m=+0.489710267 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.expose-services=) Oct 5 04:42:09 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:42:09 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:42:09 localhost podman[92079]: 2025-10-05 08:42:09.137165353 +0000 UTC m=+0.518541618 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:42:09 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:42:09 localhost podman[92071]: 2025-10-05 08:42:09.383319727 +0000 UTC m=+0.769798089 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Oct 5 04:42:09 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:42:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:42:17 localhost podman[92242]: 2025-10-05 08:42:17.677416818 +0000 UTC m=+0.081395688 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., release=1, architecture=x86_64, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5) Oct 5 04:42:17 localhost podman[92242]: 2025-10-05 08:42:17.706701041 +0000 UTC m=+0.110679861 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, config_id=tripleo_step5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37) Oct 
5 04:42:17 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:42:29 localhost podman[92267]: 2025-10-05 08:42:29.674050876 +0000 UTC m=+0.083605877 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, container_name=metrics_qdr, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., vcs-type=git) Oct 5 04:42:29 localhost podman[92267]: 2025-10-05 08:42:29.862243659 +0000 UTC m=+0.271798630 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:42:29 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:42:31 localhost sshd[92297]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:42:39 localhost systemd[1]: tmp-crun.tB3dWZ.mount: Deactivated successfully. Oct 5 04:42:39 localhost podman[92299]: 2025-10-05 08:42:39.739114598 +0000 UTC m=+0.142370359 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, com.redhat.component=openstack-collectd-container, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, release=2, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9) Oct 5 04:42:39 localhost podman[92327]: 2025-10-05 08:42:39.695032709 +0000 UTC m=+0.077965085 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 5 04:42:39 localhost podman[92302]: 2025-10-05 08:42:39.800819718 +0000 UTC m=+0.194526123 container health_status 
712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., release=1, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, version=17.1.9) Oct 5 04:42:39 localhost podman[92302]: 2025-10-05 08:42:39.853505107 +0000 UTC m=+0.247211552 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T14:45:33) Oct 5 04:42:39 localhost podman[92300]: 2025-10-05 08:42:39.854105283 +0000 UTC m=+0.253098769 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, container_name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public) Oct 5 04:42:39 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:42:39 localhost podman[92321]: 2025-10-05 08:42:39.917579251 +0000 UTC m=+0.305703246 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:07:52) Oct 5 04:42:39 localhost podman[92312]: 2025-10-05 08:42:39.7208375 +0000 UTC m=+0.106525860 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:42:39 localhost podman[92299]: 2025-10-05 08:42:39.926942711 +0000 UTC m=+0.330198462 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, release=2, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:42:39 localhost podman[92308]: 2025-10-05 08:42:39.957499788 +0000 UTC m=+0.348259324 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:42:39 localhost podman[92300]: 2025-10-05 08:42:39.960985802 +0000 UTC m=+0.359979308 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:42:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:42:39 localhost podman[92327]: 2025-10-05 08:42:39.976599039 +0000 UTC m=+0.359531455 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Oct 5 04:42:40 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:42:40 localhost podman[92312]: 2025-10-05 08:42:40.055066378 +0000 UTC m=+0.440754778 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:42:40 localhost podman[92301]: 2025-10-05 08:42:40.06561401 +0000 UTC m=+0.462064938 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 04:42:40 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:42:40 localhost podman[92321]: 2025-10-05 08:42:40.07910641 +0000 UTC m=+0.467230355 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1) Oct 5 04:42:40 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:42:40 localhost podman[92301]: 2025-10-05 08:42:40.102859376 +0000 UTC m=+0.499310314 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Oct 5 04:42:40 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:42:40 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:42:40 localhost podman[92308]: 2025-10-05 08:42:40.302331511 +0000 UTC m=+0.693091097 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, architecture=x86_64, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:42:40 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:42:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:42:42 localhost recover_tripleo_nova_virtqemud[92496]: 62622 Oct 5 04:42:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:42:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:42:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:42:48 localhost podman[92559]: 2025-10-05 08:42:48.680878115 +0000 UTC m=+0.085236990 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.9, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:42:48 localhost podman[92559]: 2025-10-05 08:42:48.712808759 +0000 UTC m=+0.117167624 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step5, io.buildah.version=1.33.12, vcs-type=git, container_name=nova_compute, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:42:48 localhost systemd[1]: 
5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:43:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:43:00 localhost podman[92586]: 2025-10-05 08:43:00.671583353 +0000 UTC m=+0.083276107 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack 
osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, batch=17.1_20250721.1) Oct 5 04:43:00 localhost podman[92586]: 2025-10-05 08:43:00.844802566 +0000 UTC m=+0.256495260 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, config_id=tripleo_step1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:43:00 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:43:10 localhost podman[92615]: 2025-10-05 08:43:10.697667441 +0000 UTC m=+0.096557623 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:43:10 localhost systemd[1]: tmp-crun.jacgxq.mount: Deactivated successfully. Oct 5 04:43:10 localhost podman[92616]: 2025-10-05 08:43:10.721068267 +0000 UTC m=+0.116971629 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, container_name=iscsid, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid) Oct 5 04:43:10 localhost podman[92624]: 2025-10-05 08:43:10.727994912 +0000 UTC m=+0.117471953 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, release=1, vcs-type=git, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:43:10 localhost podman[92616]: 2025-10-05 08:43:10.731632589 +0000 UTC m=+0.127535961 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=iscsid, io.openshift.tags=rhosp osp openstack 
osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:43:10 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:43:10 localhost podman[92615]: 2025-10-05 08:43:10.749631531 +0000 UTC m=+0.148521703 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1) Oct 5 04:43:10 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:43:10 localhost podman[92646]: 2025-10-05 08:43:10.810265262 +0000 UTC m=+0.186294033 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, version=17.1.9) Oct 5 04:43:10 localhost podman[92617]: 2025-10-05 08:43:10.828335456 +0000 UTC m=+0.208871978 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, tcib_managed=true, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:43:10 localhost podman[92617]: 2025-10-05 08:43:10.854973147 +0000 UTC m=+0.235509639 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, 
batch=17.1_20250721.1, release=1, tcib_managed=true, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, version=17.1.9) Oct 5 04:43:10 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:43:10 localhost podman[92614]: 2025-10-05 08:43:10.867877433 +0000 UTC m=+0.270581288 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-collectd, container_name=collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, vcs-type=git) Oct 5 04:43:10 localhost podman[92614]: 2025-10-05 08:43:10.874665044 +0000 UTC m=+0.277368909 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:04:03, container_name=collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, release=2, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, name=rhosp17/openstack-collectd) Oct 5 04:43:10 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:43:10 localhost podman[92647]: 2025-10-05 08:43:10.940790103 +0000 UTC m=+0.311336837 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, distribution-scope=public) Oct 5 04:43:10 localhost podman[92646]: 2025-10-05 08:43:10.947135952 +0000 UTC m=+0.323164793 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:43:10 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:43:11 localhost podman[92647]: 2025-10-05 08:43:11.012633245 +0000 UTC m=+0.383179969 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, container_name=ovn_metadata_agent, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:43:11 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:43:11 localhost podman[92630]: 2025-10-05 08:43:11.023349161 +0000 UTC m=+0.398478548 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, config_id=tripleo_step4, version=17.1.9, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc.) 
Oct 5 04:43:11 localhost podman[92630]: 2025-10-05 08:43:11.051686709 +0000 UTC m=+0.426816116 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, 
com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public) Oct 5 04:43:11 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:43:11 localhost podman[92624]: 2025-10-05 08:43:11.076734849 +0000 UTC m=+0.466211900 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4) Oct 5 04:43:11 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:43:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:43:19 localhost podman[92795]: 2025-10-05 08:43:19.683337743 +0000 UTC m=+0.092559317 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:43:19 localhost podman[92795]: 2025-10-05 08:43:19.744274533 +0000 UTC m=+0.153496077 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, 
config_id=tripleo_step5, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Oct 5 
04:43:19 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:43:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:43:31 localhost systemd[1]: tmp-crun.vO8112.mount: Deactivated successfully. Oct 5 04:43:31 localhost podman[92821]: 2025-10-05 08:43:31.689874558 +0000 UTC m=+0.096801020 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1) Oct 5 04:43:31 localhost podman[92821]: 2025-10-05 08:43:31.881876943 +0000 UTC m=+0.288803395 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:43:31 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:43:41 localhost podman[92850]: 2025-10-05 08:43:41.693993587 +0000 UTC m=+0.096133702 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=2, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, container_name=collectd, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:43:41 localhost systemd[1]: tmp-crun.IB3NQz.mount: Deactivated successfully. 
Oct 5 04:43:41 localhost podman[92850]: 2025-10-05 08:43:41.713669023 +0000 UTC m=+0.115809158 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, container_name=collectd, managed_by=tripleo_ansible, release=2, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1) Oct 5 04:43:41 localhost podman[92851]: 2025-10-05 08:43:41.746529522 +0000 UTC m=+0.145759819 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, vcs-type=git) Oct 5 04:43:41 localhost podman[92851]: 2025-10-05 08:43:41.766078945 +0000 UTC m=+0.165309242 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, managed_by=tripleo_ansible, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:43:41 localhost podman[92853]: 2025-10-05 08:43:41.722937191 +0000 UTC m=+0.103077468 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:43:41 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:43:41 localhost podman[92853]: 2025-10-05 08:43:41.808810378 +0000 UTC m=+0.188950685 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public) Oct 5 04:43:41 localhost podman[92871]: 2025-10-05 08:43:41.815837415 +0000 UTC m=+0.199613999 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=logrotate_crond, tcib_managed=true, 
version=17.1.9, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container) Oct 5 04:43:41 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:43:41 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:43:41 localhost podman[92852]: 2025-10-05 08:43:41.859897514 +0000 UTC m=+0.257886588 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, container_name=iscsid, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team) Oct 5 04:43:41 localhost podman[92872]: 2025-10-05 08:43:41.809531957 +0000 UTC m=+0.185626525 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-type=git, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53) Oct 5 04:43:41 localhost podman[92852]: 2025-10-05 08:43:41.895002463 +0000 UTC m=+0.292991557 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Oct 5 04:43:41 localhost podman[92871]: 2025-10-05 08:43:41.89525522 +0000 UTC m=+0.279031774 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, tcib_managed=true, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, vcs-type=git, architecture=x86_64, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.buildah.version=1.33.12, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9) Oct 5 04:43:41 localhost podman[92859]: 2025-10-05 08:43:41.89415206 +0000 UTC m=+0.280708358 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible) Oct 5 04:43:41 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:43:41 localhost podman[92872]: 2025-10-05 08:43:41.940113209 +0000 UTC m=+0.316207827 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:43:41 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:43:42 localhost podman[92870]: 2025-10-05 08:43:42.028824051 +0000 UTC m=+0.412749589 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team) Oct 5 04:43:42 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:43:42 localhost podman[92870]: 2025-10-05 08:43:42.110795344 +0000 UTC m=+0.494720882 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47) Oct 5 04:43:42 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:43:42 localhost podman[92859]: 2025-10-05 08:43:42.262930763 +0000 UTC m=+0.649487041 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:48:37) Oct 5 04:43:42 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:43:42 localhost systemd[1]: tmp-crun.meJRgG.mount: Deactivated successfully. Oct 5 04:43:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:43:50 localhost podman[93103]: 2025-10-05 08:43:50.677766819 +0000 UTC m=+0.085400956 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, container_name=nova_compute, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37) Oct 5 04:43:50 localhost podman[93103]: 2025-10-05 08:43:50.711693616 +0000 UTC m=+0.119327793 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, container_name=nova_compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 04:43:50 localhost 
systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:44:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:44:00 localhost recover_tripleo_nova_virtqemud[93130]: 62622 Oct 5 04:44:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:44:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:44:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:44:02 localhost systemd[1]: tmp-crun.SXNZcu.mount: Deactivated successfully. Oct 5 04:44:02 localhost podman[93131]: 2025-10-05 08:44:02.667204483 +0000 UTC m=+0.077455293 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=metrics_qdr, release=1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible) Oct 5 04:44:02 localhost podman[93131]: 2025-10-05 08:44:02.841778861 +0000 UTC m=+0.252029681 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1, 
config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:44:02 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:44:12 localhost systemd[1]: tmp-crun.l9ozht.mount: Deactivated successfully. Oct 5 04:44:12 localhost systemd[1]: tmp-crun.Cy86bM.mount: Deactivated successfully. Oct 5 04:44:12 localhost podman[93174]: 2025-10-05 08:44:12.789423518 +0000 UTC m=+0.141100014 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, release=1, batch=17.1_20250721.1) Oct 5 04:44:12 localhost podman[93186]: 2025-10-05 08:44:12.738582788 +0000 UTC m=+0.085598040 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, 
release=1, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack 
osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Oct 5 04:44:12 localhost podman[93173]: 2025-10-05 08:44:12.770461541 +0000 UTC m=+0.121860430 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, distribution-scope=public, io.openshift.expose-services=) Oct 5 04:44:12 localhost podman[93160]: 2025-10-05 08:44:12.72106814 +0000 UTC m=+0.121207813 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, name=rhosp17/openstack-collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Oct 5 04:44:12 localhost podman[93176]: 2025-10-05 08:44:12.827529107 +0000 UTC m=+0.172822563 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, build-date=2025-07-21T13:07:52, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:44:12 localhost podman[93160]: 2025-10-05 08:44:12.849608107 +0000 UTC m=+0.249747740 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, batch=17.1_20250721.1, 
managed_by=tripleo_ansible, release=2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container) Oct 5 04:44:12 localhost podman[93175]: 2025-10-05 
08:44:12.753889757 +0000 UTC m=+0.102607174 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, release=1, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 5 04:44:12 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:44:12 localhost podman[93176]: 2025-10-05 08:44:12.861693421 +0000 UTC m=+0.206986857 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:44:12 localhost podman[93186]: 2025-10-05 08:44:12.872030067 +0000 UTC m=+0.219045329 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, 
batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:44:12 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:44:12 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:44:12 localhost podman[93175]: 2025-10-05 08:44:12.886759441 +0000 UTC m=+0.235476868 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, 
com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:44:12 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:44:12 localhost podman[93161]: 2025-10-05 08:44:12.937864298 +0000 UTC m=+0.298064312 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, release=1) Oct 5 04:44:12 localhost podman[93161]: 2025-10-05 08:44:12.963519394 +0000 UTC m=+0.323719438 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller) Oct 5 04:44:12 localhost podman[93172]: 2025-10-05 08:44:12.97384401 +0000 UTC m=+0.334345843 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:27:15, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, release=1, version=17.1.9, config_id=tripleo_step3, managed_by=tripleo_ansible) Oct 5 04:44:12 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Deactivated successfully. Oct 5 04:44:13 localhost podman[93173]: 2025-10-05 08:44:13.005020044 +0000 UTC m=+0.356418933 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:44:13 localhost podman[93172]: 2025-10-05 08:44:13.010677445 +0000 UTC m=+0.371179308 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, version=17.1.9) Oct 5 04:44:13 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:44:13 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:44:13 localhost podman[93174]: 2025-10-05 08:44:13.148928842 +0000 UTC m=+0.500605388 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1, tcib_managed=true) Oct 5 04:44:13 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:44:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:44:21 localhost systemd[1]: tmp-crun.MvFTHm.mount: Deactivated successfully. Oct 5 04:44:21 localhost podman[93336]: 2025-10-05 08:44:21.706284442 +0000 UTC m=+0.107548577 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1) Oct 5 04:44:21 localhost podman[93336]: 2025-10-05 08:44:21.760714657 +0000 UTC m=+0.161978772 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, release=1, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37) Oct 5 04:44:21 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:44:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:44:33 localhost systemd[1]: tmp-crun.SfPeav.mount: Deactivated successfully. 
Oct 5 04:44:33 localhost podman[93362]: 2025-10-05 08:44:33.680522148 +0000 UTC m=+0.092477134 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, 
batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, distribution-scope=public) Oct 5 04:44:33 localhost podman[93362]: 2025-10-05 08:44:33.880810265 +0000 UTC m=+0.292765231 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:44:33 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:44:43 localhost systemd[1]: tmp-crun.PLYA8p.mount: Deactivated successfully. Oct 5 04:44:43 localhost podman[93392]: 2025-10-05 08:44:43.753691422 +0000 UTC m=+0.158622222 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, release=2, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, container_name=collectd, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc.) Oct 5 04:44:43 localhost podman[93392]: 2025-10-05 08:44:43.762762515 +0000 UTC m=+0.167693315 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, release=2, distribution-scope=public) Oct 5 04:44:43 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:44:43 localhost podman[93419]: 2025-10-05 08:44:43.726610438 +0000 UTC m=+0.106880529 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, container_name=logrotate_crond, architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:44:43 localhost podman[93395]: 2025-10-05 08:44:43.695918848 +0000 UTC m=+0.093132042 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible) Oct 5 04:44:43 localhost podman[93419]: 2025-10-05 08:44:43.806257248 +0000 UTC m=+0.186527359 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, release=1, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:44:43 localhost podman[93420]: 2025-10-05 08:44:43.767613715 +0000 UTC m=+0.144469854 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible) Oct 5 04:44:43 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:44:43 localhost podman[93393]: 2025-10-05 08:44:43.855904597 +0000 UTC m=+0.255287689 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:44:43 localhost podman[93402]: 2025-10-05 08:44:43.879054145 +0000 UTC 
m=+0.259354527 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, release=1, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:48:37) Oct 5 04:44:43 localhost podman[93408]: 2025-10-05 08:44:43.928256981 +0000 UTC m=+0.305301636 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:44:43 localhost podman[93393]: 2025-10-05 08:44:43.943883469 +0000 UTC m=+0.343266621 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git) Oct 5 04:44:43 localhost podman[93393]: unhealthy Oct 5 04:44:43 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:44:43 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:44:43 localhost podman[93408]: 2025-10-05 08:44:43.964627543 +0000 UTC m=+0.341672208 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 5 04:44:43 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:44:43 localhost podman[93394]: 2025-10-05 08:44:43.97759196 +0000 UTC m=+0.373308304 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.buildah.version=1.33.12) Oct 5 04:44:43 localhost podman[93395]: 2025-10-05 08:44:43.981751392 +0000 UTC m=+0.378964566 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:44:43 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:44:44 localhost podman[93420]: 2025-10-05 08:44:44.003725709 +0000 UTC m=+0.380581918 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1) Oct 5 04:44:44 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. 
Oct 5 04:44:44 localhost podman[93394]: 2025-10-05 08:44:44.035053668 +0000 UTC m=+0.430770052 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
container_name=iscsid, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:44:44 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:44:44 localhost podman[93402]: 2025-10-05 08:44:44.249831141 +0000 UTC m=+0.630131543 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, container_name=nova_migration_target, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true) Oct 5 04:44:44 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:44:52 localhost podman[93683]: 2025-10-05 08:44:52.67977029 +0000 UTC m=+0.090176762 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute) Oct 5 04:44:52 localhost podman[93683]: 2025-10-05 08:44:52.716929545 +0000 UTC m=+0.127335977 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.9, 
distribution-scope=public) Oct 5 04:44:52 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5205 writes, 701 syncs, 7.43 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:45:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:45:04 localhost systemd[1]: tmp-crun.k89uQh.mount: Deactivated successfully. 
Oct 5 04:45:04 localhost podman[93724]: 2025-10-05 08:45:04.703356509 +0000 UTC m=+0.102926843 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Oct 5 04:45:04 localhost podman[93724]: 2025-10-05 08:45:04.904077888 +0000 UTC m=+0.303648172 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, architecture=x86_64, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:45:04 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5435 writes, 24K keys, 5435 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5435 writes, 715 syncs, 7.60 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8 writes, 19 keys, 8 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 8 writes, 4 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:45:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:45:14 localhost recover_tripleo_nova_virtqemud[93794]: 62622 Oct 5 04:45:14 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:45:14 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:45:14 localhost podman[93756]: 2025-10-05 08:45:14.716691193 +0000 UTC m=+0.111970035 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:45:14 localhost podman[93753]: 2025-10-05 08:45:14.738914937 +0000 UTC m=+0.120800941 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, container_name=collectd) Oct 5 04:45:14 localhost podman[93756]: 2025-10-05 08:45:14.744757614 +0000 UTC m=+0.140036506 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1) Oct 5 04:45:14 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:45:14 localhost podman[93757]: 2025-10-05 08:45:14.761953523 +0000 UTC m=+0.155691445 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, container_name=nova_migration_target, maintainer=OpenStack TripleO Team) Oct 5 04:45:14 localhost podman[93753]: 2025-10-05 08:45:14.823014637 +0000 UTC m=+0.204900661 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, container_name=collectd, release=2, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, version=17.1.9) Oct 5 04:45:14 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:45:14 localhost podman[93755]: 2025-10-05 08:45:14.879036605 +0000 UTC m=+0.274370989 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, 
release=1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 5 04:45:14 localhost podman[93755]: 2025-10-05 08:45:14.887809719 +0000 UTC m=+0.283144073 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15, version=17.1.9, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, release=1, managed_by=tripleo_ansible) Oct 5 04:45:14 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:45:14 localhost podman[93758]: 2025-10-05 08:45:14.991502052 +0000 UTC m=+0.376784138 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T15:29:47) Oct 5 04:45:15 localhost podman[93765]: 2025-10-05 08:45:14.844496111 +0000 UTC m=+0.221435264 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:45:15 localhost podman[93754]: 2025-10-05 08:45:14.965829496 +0000 UTC m=+0.360892143 
container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, container_name=ovn_controller, managed_by=tripleo_ansible, release=1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:45:15 localhost podman[93758]: 2025-10-05 08:45:15.024011022 +0000 UTC m=+0.409293118 container exec_died 
80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, container_name=ceilometer_agent_ipmi, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, 
batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:45:15 localhost podman[93765]: 2025-10-05 08:45:15.03180052 +0000 UTC m=+0.408739743 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, release=1, architecture=x86_64, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12) Oct 5 04:45:15 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:45:15 localhost podman[93754]: 2025-10-05 08:45:15.046983466 +0000 UTC m=+0.442046123 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:45:15 localhost podman[93754]: unhealthy Oct 5 04:45:15 localhost systemd[1]: 
cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Deactivated successfully. Oct 5 04:45:15 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:45:15 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:45:15 localhost podman[93759]: 2025-10-05 08:45:15.124654754 +0000 UTC m=+0.506630500 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-07-21T13:07:52, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, architecture=x86_64, vendor=Red Hat, Inc.) Oct 5 04:45:15 localhost podman[93757]: 2025-10-05 08:45:15.131955609 +0000 UTC m=+0.525693521 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:45:15 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. 
Oct 5 04:45:15 localhost podman[93759]: 2025-10-05 08:45:15.167079258 +0000 UTC m=+0.549055064 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-cron-container, vcs-type=git, tcib_managed=true) Oct 5 04:45:15 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:45:23 localhost podman[93934]: 2025-10-05 08:45:23.694533346 +0000 UTC m=+0.095519436 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T14:48:37, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:45:23 localhost podman[93934]: 2025-10-05 08:45:23.725816582 +0000 UTC m=+0.126802682 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, 
build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public) Oct 5 04:45:23 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:45:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:45:35 localhost podman[93960]: 2025-10-05 08:45:35.680940339 +0000 UTC m=+0.092615128 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, name=rhosp17/openstack-qdrouterd, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, version=17.1.9) Oct 5 04:45:35 localhost podman[93960]: 2025-10-05 08:45:35.889276411 +0000 UTC m=+0.300951260 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 
17.1 qdrouterd, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:45:35 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:45:45 localhost systemd[1]: tmp-crun.1Lty9L.mount: Deactivated successfully. Oct 5 04:45:45 localhost podman[94000]: 2025-10-05 08:45:45.707582823 +0000 UTC m=+0.090842521 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64) Oct 5 04:45:45 localhost podman[93989]: 2025-10-05 08:45:45.767839604 +0000 UTC m=+0.156775954 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, vcs-type=git, version=17.1.9, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, architecture=x86_64, container_name=iscsid, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, release=1) Oct 5 04:45:45 localhost podman[93988]: 2025-10-05 08:45:45.773594128 +0000 UTC m=+0.166677968 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., 
name=rhosp17/openstack-ovn-controller, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:45:45 localhost podman[93987]: 2025-10-05 08:45:45.798922145 +0000 UTC m=+0.197845611 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, batch=17.1_20250721.1, summary=Red 
Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, container_name=collectd, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=2, architecture=x86_64) Oct 5 04:45:45 localhost podman[93988]: 2025-10-05 08:45:45.821619492 +0000 UTC m=+0.214703322 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 5 04:45:45 localhost podman[93988]: unhealthy Oct 5 04:45:45 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:45:45 localhost podman[94008]: 2025-10-05 08:45:45.835522984 +0000 UTC m=+0.207412888 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, release=1, distribution-scope=public, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:45:45 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:45:45 localhost podman[94002]: 2025-10-05 08:45:45.816066303 +0000 UTC m=+0.192979332 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:45:45 localhost podman[93989]: 2025-10-05 08:45:45.882195262 +0000 UTC m=+0.271131612 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., release=1) Oct 5 04:45:45 localhost podman[93987]: 2025-10-05 08:45:45.887142905 +0000 UTC m=+0.286066431 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, tcib_managed=true, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, release=2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible) Oct 5 04:45:45 localhost systemd[1]: 
5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:45:45 localhost podman[94008]: 2025-10-05 08:45:45.902578317 +0000 UTC m=+0.274468231 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4) Oct 5 04:45:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:45:45 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:45:45 localhost podman[93995]: 2025-10-05 08:45:45.884459143 +0000 UTC m=+0.267319301 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:45:45 localhost podman[94002]: 2025-10-05 08:45:45.953635342 +0000 UTC m=+0.330548401 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, 
version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:45:45 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:45:45 localhost podman[93995]: 2025-10-05 08:45:45.967722939 +0000 UTC m=+0.350583097 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, release=1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:45:45 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:45:46 localhost podman[94019]: 2025-10-05 08:45:45.749289877 +0000 UTC m=+0.118932521 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:45:46 localhost podman[94019]: 2025-10-05 08:45:46.037840394 +0000 UTC m=+0.407483028 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 
'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:45:46 localhost podman[94019]: unhealthy Oct 5 04:45:46 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:45:46 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:45:46 localhost podman[94000]: 2025-10-05 08:45:46.069920053 +0000 UTC m=+0.453179761 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, release=1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container) Oct 5 04:45:46 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:45:46 localhost systemd[1]: tmp-crun.Syx6W9.mount: Deactivated successfully. Oct 5 04:45:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:45:54 localhost podman[94166]: 2025-10-05 08:45:54.459257486 +0000 UTC m=+0.098409492 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, config_id=tripleo_step5, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, container_name=nova_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:45:54 localhost podman[94166]: 2025-10-05 08:45:54.4867113 +0000 UTC m=+0.125863326 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, config_id=tripleo_step5, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible)
Oct 5 04:45:54 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully.
Oct 5 04:45:55 localhost podman[94311]:
Oct 5 04:45:55 localhost podman[94311]: 2025-10-05 08:45:55.976959027 +0000 UTC m=+0.078777108 container create 7dc2d59c9e68699bc40f5694ff46104c9c3d8be8449b8e253c07b7c8afbd9de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_joliot, GIT_BRANCH=main, name=rhceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7)
Oct 5 04:45:56 localhost systemd[1]: Started libpod-conmon-7dc2d59c9e68699bc40f5694ff46104c9c3d8be8449b8e253c07b7c8afbd9de6.scope.
Oct 5 04:45:56 localhost podman[94311]: 2025-10-05 08:45:55.944282223 +0000 UTC m=+0.046100324 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 04:45:56 localhost systemd[1]: Started libcrun container.
Oct 5 04:45:56 localhost podman[94311]: 2025-10-05 08:45:56.063787129 +0000 UTC m=+0.165605210 container init 7dc2d59c9e68699bc40f5694ff46104c9c3d8be8449b8e253c07b7c8afbd9de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_joliot, version=7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, ceph=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, CEPH_POINT_RELEASE=) Oct 5 04:45:56 localhost podman[94311]: 2025-10-05 08:45:56.076464458 +0000 UTC m=+0.178282539 container start 7dc2d59c9e68699bc40f5694ff46104c9c3d8be8449b8e253c07b7c8afbd9de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_joliot, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, 
maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, release=553, ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, name=rhceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc.) Oct 5 04:45:56 localhost podman[94311]: 2025-10-05 08:45:56.076794057 +0000 UTC m=+0.178612138 container attach 7dc2d59c9e68699bc40f5694ff46104c9c3d8be8449b8e253c07b7c8afbd9de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_joliot, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 04:45:56 localhost crazy_joliot[94326]: 167 167 Oct 5 04:45:56 localhost systemd[1]: libpod-7dc2d59c9e68699bc40f5694ff46104c9c3d8be8449b8e253c07b7c8afbd9de6.scope: Deactivated 
successfully. Oct 5 04:45:56 localhost podman[94311]: 2025-10-05 08:45:56.082082458 +0000 UTC m=+0.183900539 container died 7dc2d59c9e68699bc40f5694ff46104c9c3d8be8449b8e253c07b7c8afbd9de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_joliot, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, ceph=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, CEPH_POINT_RELEASE=, release=553, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 04:45:56 localhost podman[94331]: 2025-10-05 08:45:56.177940882 +0000 UTC m=+0.084565663 container remove 7dc2d59c9e68699bc40f5694ff46104c9c3d8be8449b8e253c07b7c8afbd9de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_joliot, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 
7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True) Oct 5 04:45:56 localhost systemd[1]: libpod-conmon-7dc2d59c9e68699bc40f5694ff46104c9c3d8be8449b8e253c07b7c8afbd9de6.scope: Deactivated successfully. Oct 5 04:45:56 localhost podman[94352]: Oct 5 04:45:56 localhost podman[94352]: 2025-10-05 08:45:56.396867967 +0000 UTC m=+0.076097796 container create 5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_darwin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, distribution-scope=public, version=7, RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, 
CEPH_POINT_RELEASE=) Oct 5 04:45:56 localhost systemd[1]: Started libpod-conmon-5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7.scope. Oct 5 04:45:56 localhost systemd[1]: Started libcrun container. Oct 5 04:45:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b0eb55744e1ec6e120f4953240fae398d72f9a87f018df4d691ffb57428873/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 04:45:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b0eb55744e1ec6e120f4953240fae398d72f9a87f018df4d691ffb57428873/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:45:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b0eb55744e1ec6e120f4953240fae398d72f9a87f018df4d691ffb57428873/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 04:45:56 localhost podman[94352]: 2025-10-05 08:45:56.366011222 +0000 UTC m=+0.045241101 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 04:45:56 localhost podman[94352]: 2025-10-05 08:45:56.466289273 +0000 UTC m=+0.145519112 container init 5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_darwin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph 
ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vendor=Red Hat, Inc.) Oct 5 04:45:56 localhost podman[94352]: 2025-10-05 08:45:56.477822022 +0000 UTC m=+0.157051851 container start 5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_darwin, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, ceph=True, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 5 04:45:56 localhost podman[94352]: 2025-10-05 08:45:56.47812244 +0000 UTC m=+0.157352279 container attach 5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_darwin, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:45:56 localhost systemd[1]: var-lib-containers-storage-overlay-0bd7e53d295b0fe73521622987f14a644f0da53e1ffa165fa4c2132698debd9c-merged.mount: Deactivated successfully. 
Oct 5 04:45:57 localhost peaceful_darwin[94368]: [
Oct 5 04:45:57 localhost peaceful_darwin[94368]: {
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "available": false,
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "ceph_device": false,
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "device_id": "QEMU_DVD-ROM_QM00001",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "lsm_data": {},
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "lvs": [],
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "path": "/dev/sr0",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "rejected_reasons": [
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "Insufficient space (<5GB)",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "Has a FileSystem"
Oct 5 04:45:57 localhost peaceful_darwin[94368]: ],
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "sys_api": {
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "actuators": null,
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "device_nodes": "sr0",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "human_readable_size": "482.00 KB",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "id_bus": "ata",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "model": "QEMU DVD-ROM",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "nr_requests": "2",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "partitions": {},
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "path": "/dev/sr0",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "removable": "1",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "rev": "2.5+",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "ro": "0",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "rotational": "1",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "sas_address": "",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "sas_device_handle": "",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "scheduler_mode": "mq-deadline",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "sectors": 0,
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "sectorsize": "2048",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "size": 493568.0,
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "support_discard": "0",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "type": "disk",
Oct 5 04:45:57 localhost peaceful_darwin[94368]: "vendor": "QEMU"
Oct 5 04:45:57 localhost peaceful_darwin[94368]: }
Oct 5 04:45:57 localhost peaceful_darwin[94368]: }
Oct 5 04:45:57 localhost peaceful_darwin[94368]: ]
Oct 5 04:45:57 localhost systemd[1]: libpod-5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7.scope: Deactivated successfully.
Oct 5 04:45:57 localhost systemd[1]: libpod-5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7.scope: Consumed 1.039s CPU time.
Oct 5 04:45:57 localhost podman[94352]: 2025-10-05 08:45:57.499154027 +0000 UTC m=+1.178383826 container died 5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_darwin, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, version=7, maintainer=Guillaume Abrioux , ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-type=git, name=rhceph, architecture=x86_64, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, com.redhat.component=rhceph-container)
Oct 5 04:45:57 localhost systemd[1]: var-lib-containers-storage-overlay-26b0eb55744e1ec6e120f4953240fae398d72f9a87f018df4d691ffb57428873-merged.mount: Deactivated successfully.
Oct 5 04:45:57 localhost podman[96321]: 2025-10-05 08:45:57.604585425 +0000 UTC m=+0.095618608 container remove 5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_darwin, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, io.openshift.expose-services=, version=7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container)
Oct 5 04:45:57 localhost systemd[1]: libpod-conmon-5cb0c7f712b19a6ee1f6cbbd5f063eaa163e2a97d8c1ff6a7a2618810551fdd7.scope: Deactivated successfully.
Oct 5 04:46:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:46:06 localhost systemd[1]: tmp-crun.MHEyKd.mount: Deactivated successfully.
Oct 5 04:46:06 localhost podman[96350]: 2025-10-05 08:46:06.698914606 +0000 UTC m=+0.109094819 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, tcib_managed=true, config_id=tripleo_step1, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd) Oct 5 04:46:06 localhost podman[96350]: 2025-10-05 08:46:06.895998536 +0000 UTC m=+0.306178769 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc.)
Oct 5 04:46:06 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.
Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.
Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.
Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.
Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.
Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.
Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.
Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.
Oct 5 04:46:16 localhost podman[96406]: 2025-10-05 08:46:16.756021281 +0000 UTC m=+0.108319858 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, 
architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-type=git) Oct 5 04:46:16 localhost podman[96379]: 2025-10-05 08:46:16.722675279 +0000 UTC m=+0.114861442 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, container_name=collectd, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:46:16 localhost systemd[1]: tmp-crun.43CUMU.mount: Deactivated successfully. Oct 5 04:46:16 localhost podman[96381]: 2025-10-05 08:46:16.828844419 +0000 UTC m=+0.215979667 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, container_name=iscsid, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:46:16 localhost podman[96381]: 2025-10-05 08:46:16.842668729 +0000 UTC m=+0.229803997 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:46:16 localhost podman[96379]: 2025-10-05 08:46:16.858149562 +0000 UTC m=+0.250335845 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, release=2, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, 
container_name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd) Oct 5 04:46:16 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:46:16 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:46:16 localhost podman[96387]: 2025-10-05 08:46:16.938483431 +0000 UTC m=+0.315192791 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, build-date=2025-07-21T14:45:33, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:46:16 localhost podman[96404]: 2025-10-05 08:46:16.983139245 +0000 UTC m=+0.348028518 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, architecture=x86_64) Oct 5 04:46:16 localhost podman[96387]: 2025-10-05 08:46:16.995749082 +0000 UTC m=+0.372458442 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.9, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:46:17 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:46:17 localhost podman[96398]: 2025-10-05 08:46:17.035492756 +0000 UTC m=+0.401774037 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:46:17 localhost podman[96404]: 2025-10-05 08:46:17.046980723 +0000 UTC m=+0.411870006 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, config_id=tripleo_step4, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 5 04:46:17 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:46:17 localhost podman[96406]: 2025-10-05 08:46:17.089833449 +0000 UTC m=+0.442131996 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, build-date=2025-07-21T13:07:52, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12) Oct 5 04:46:17 localhost podman[96380]: 2025-10-05 08:46:17.089965912 +0000 UTC m=+0.479675779 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-ovn-controller) Oct 5 04:46:17 localhost podman[96410]: 2025-10-05 08:46:16.796922955 +0000 UTC m=+0.150872905 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, batch=17.1_20250721.1) Oct 5 04:46:17 localhost podman[96410]: 2025-10-05 08:46:17.128642127 +0000 UTC m=+0.482592047 container exec_died 
cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public) Oct 5 04:46:17 localhost podman[96410]: unhealthy Oct 5 04:46:17 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:46:17 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:46:17 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:46:17 localhost podman[96380]: 2025-10-05 08:46:17.181292725 +0000 UTC m=+0.571002642 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:28:44, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Oct 5 04:46:17 localhost podman[96380]: unhealthy Oct 5 04:46:17 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:46:17 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:46:17 localhost podman[96398]: 2025-10-05 08:46:17.417011459 +0000 UTC m=+0.783292720 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12) Oct 5 04:46:17 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:46:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:46:24 localhost recover_tripleo_nova_virtqemud[96560]: 62622 Oct 5 04:46:24 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:46:24 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:46:24 localhost systemd[1]: tmp-crun.066bpg.mount: Deactivated successfully. 
Oct 5 04:46:24 localhost podman[96553]: 2025-10-05 08:46:24.702031899 +0000 UTC m=+0.106429388 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., container_name=nova_compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true) Oct 5 04:46:24 localhost podman[96553]: 2025-10-05 08:46:24.731637111 +0000 UTC m=+0.136034770 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, 
com.redhat.component=openstack-nova-compute-container) Oct 5 04:46:24 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:46:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:46:37 localhost podman[96579]: 2025-10-05 08:46:37.680717408 +0000 UTC m=+0.091466666 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, config_id=tripleo_step1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:46:37 localhost podman[96579]: 2025-10-05 08:46:37.878864808 +0000 UTC m=+0.289614076 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:46:37 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:46:47 localhost podman[96608]: 2025-10-05 08:46:47.728571708 +0000 UTC m=+0.127234393 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64) Oct 5 04:46:47 localhost podman[96607]: 2025-10-05 08:46:47.756862265 +0000 UTC m=+0.158836309 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:46:47 localhost podman[96607]: 2025-10-05 08:46:47.763196534 +0000 UTC m=+0.165170578 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, container_name=collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true) Oct 5 04:46:47 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:46:47 localhost podman[96608]: 2025-10-05 08:46:47.842928437 +0000 UTC m=+0.241591162 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller) Oct 5 04:46:47 localhost podman[96608]: unhealthy Oct 5 04:46:47 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:46:47 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:46:47 localhost podman[96628]: 2025-10-05 08:46:47.856997483 +0000 UTC m=+0.223446327 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52) Oct 5 04:46:47 localhost podman[96610]: 2025-10-05 08:46:47.862003337 +0000 UTC m=+0.252306858 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:46:47 localhost podman[96609]: 2025-10-05 08:46:47.716010613 +0000 UTC m=+0.110929168 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, 
vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:46:47 localhost podman[96638]: 2025-10-05 08:46:47.821619216 +0000 UTC m=+0.198853690 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vendor=Red Hat, Inc.) Oct 5 04:46:47 localhost podman[96610]: 2025-10-05 08:46:47.891949077 +0000 UTC m=+0.282252628 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T14:45:33, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, tcib_managed=true, version=17.1.9) Oct 5 04:46:47 localhost podman[96609]: 2025-10-05 08:46:47.901935405 +0000 UTC m=+0.296853980 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid) Oct 5 04:46:47 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:46:47 localhost podman[96616]: 2025-10-05 08:46:47.913225547 +0000 UTC m=+0.300864347 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, 
io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:46:47 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:46:47 localhost podman[96638]: 2025-10-05 08:46:47.954043688 +0000 UTC m=+0.331278131 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1) Oct 5 04:46:47 localhost podman[96638]: unhealthy Oct 5 04:46:47 localhost podman[96622]: 2025-10-05 08:46:47.964618901 +0000 UTC m=+0.348719527 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, release=1, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, tcib_managed=true) Oct 5 04:46:47 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:46:47 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:46:47 localhost podman[96628]: 2025-10-05 08:46:47.968078894 +0000 UTC m=+0.334527788 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, container_name=logrotate_crond, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Oct 5 04:46:47 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:46:48 localhost podman[96622]: 2025-10-05 08:46:48.023904756 +0000 UTC m=+0.408005462 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1) Oct 5 04:46:48 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:46:48 localhost podman[96616]: 2025-10-05 08:46:48.245935505 +0000 UTC m=+0.633574325 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:46:48 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:46:55 localhost systemd[1]: tmp-crun.vPcriF.mount: Deactivated successfully. 
Oct 5 04:46:55 localhost podman[96780]: 2025-10-05 08:46:55.688897558 +0000 UTC m=+0.094240212 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_compute, vcs-type=git) Oct 5 04:46:55 localhost podman[96780]: 2025-10-05 08:46:55.716619338 +0000 UTC m=+0.121961962 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, release=1, container_name=nova_compute, description=Red Hat OpenStack 
Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, tcib_managed=true) Oct 5 04:46:55 localhost 
systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:47:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:47:08 localhost systemd[1]: tmp-crun.ptU3mM.mount: Deactivated successfully. Oct 5 04:47:08 localhost podman[96933]: 2025-10-05 08:47:08.692911695 +0000 UTC m=+0.104003873 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, version=17.1.9, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, config_id=tripleo_step1, vcs-type=git) Oct 5 04:47:08 localhost podman[96933]: 2025-10-05 08:47:08.916073583 +0000 UTC m=+0.327165721 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1) Oct 5 04:47:08 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:47:18 localhost systemd[1]: tmp-crun.UjT6wj.mount: Deactivated successfully. Oct 5 04:47:18 localhost systemd[1]: tmp-crun.TBNHbl.mount: Deactivated successfully. Oct 5 04:47:18 localhost podman[96983]: 2025-10-05 08:47:18.740217597 +0000 UTC m=+0.106577182 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:47:18 localhost podman[96962]: 2025-10-05 08:47:18.721492055 +0000 UTC m=+0.124745376 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, container_name=collectd, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-collectd-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.33.12, release=2, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:47:18 localhost podman[96964]: 2025-10-05 08:47:18.777061532 +0000 UTC m=+0.169552795 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc.) 
Oct 5 04:47:18 localhost podman[96962]: 2025-10-05 08:47:18.801200198 +0000 UTC m=+0.204453529 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, release=2, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible) Oct 5 04:47:18 localhost podman[96963]: 2025-10-05 08:47:18.757497648 +0000 UTC m=+0.157554154 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, container_name=ovn_controller) Oct 5 04:47:18 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:47:18 localhost podman[96964]: 2025-10-05 08:47:18.813868656 +0000 UTC m=+0.206359979 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public) Oct 5 04:47:18 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:47:18 localhost podman[96963]: 2025-10-05 08:47:18.843793407 +0000 UTC m=+0.243849893 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1) Oct 5 04:47:18 localhost podman[96963]: unhealthy Oct 5 04:47:18 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:47:18 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:47:18 localhost podman[96969]: 2025-10-05 08:47:18.888701948 +0000 UTC m=+0.275195351 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, 
batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc.) Oct 5 04:47:18 localhost podman[96969]: 2025-10-05 08:47:18.917335743 +0000 UTC m=+0.303829106 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:47:18 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:47:18 localhost podman[96983]: 2025-10-05 08:47:18.931627785 +0000 UTC m=+0.297987370 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-cron, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=) Oct 5 04:47:18 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:47:18 localhost podman[96977]: 2025-10-05 08:47:18.815731766 +0000 UTC m=+0.196208248 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:47:18 localhost podman[96976]: 2025-10-05 08:47:18.977574675 +0000 UTC m=+0.364147390 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1) Oct 5 04:47:19 localhost podman[96997]: 2025-10-05 08:47:19.033341456 +0000 UTC m=+0.406679247 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:47:19 localhost podman[96977]: 2025-10-05 08:47:19.052243752 +0000 UTC m=+0.432720244 container exec_died 
80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:47:19 localhost podman[96997]: 2025-10-05 08:47:19.097121692 +0000 UTC m=+0.470459443 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 5 04:47:19 localhost podman[96997]: unhealthy Oct 5 04:47:19 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:47:19 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:47:19 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:47:19 localhost podman[96976]: 2025-10-05 08:47:19.355908733 +0000 UTC m=+0.742481438 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, 
description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, version=17.1.9) Oct 5 04:47:19 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:47:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:47:26 localhost podman[97129]: 2025-10-05 08:47:26.693995314 +0000 UTC m=+0.099035409 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, container_name=nova_compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible) Oct 5 04:47:26 localhost podman[97129]: 2025-10-05 08:47:26.730920363 +0000 UTC m=+0.135960508 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, release=1, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, container_name=nova_compute, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git) Oct 5 04:47:26 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:47:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:47:39 localhost systemd[1]: tmp-crun.skrB4I.mount: Deactivated successfully. Oct 5 04:47:39 localhost podman[97157]: 2025-10-05 08:47:39.685123397 +0000 UTC m=+0.096305867 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:47:39 localhost podman[97157]: 2025-10-05 08:47:39.929847622 +0000 UTC m=+0.341030062 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:47:39 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:47:49 localhost podman[97207]: 2025-10-05 08:47:49.706765606 +0000 UTC m=+0.084858011 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:07:52, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., architecture=x86_64) Oct 5 04:47:49 localhost podman[97207]: 2025-10-05 08:47:49.743731764 +0000 UTC m=+0.121824159 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, container_name=logrotate_crond, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-cron) Oct 5 04:47:49 localhost podman[97202]: 2025-10-05 08:47:49.753461045 +0000 UTC m=+0.138255829 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, 
io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 5 04:47:49 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:47:49 localhost podman[97202]: 2025-10-05 08:47:49.776656275 +0000 UTC m=+0.161451069 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git) Oct 5 04:47:49 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:47:49 localhost systemd[1]: tmp-crun.NyxY1i.mount: Deactivated successfully. Oct 5 04:47:49 localhost podman[97190]: 2025-10-05 08:47:49.82470818 +0000 UTC m=+0.210637154 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33) Oct 5 04:47:49 localhost podman[97187]: 2025-10-05 08:47:49.858838712 +0000 UTC m=+0.260296951 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-collectd-container) Oct 5 04:47:49 localhost podman[97200]: 2025-10-05 08:47:49.880260966 +0000 UTC m=+0.261490914 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:47:49 localhost 
podman[97187]: 2025-10-05 08:47:49.900876647 +0000 UTC m=+0.302334896 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack 
Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T13:04:03) Oct 5 04:47:49 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:47:49 localhost podman[97190]: 2025-10-05 08:47:49.919487565 +0000 UTC m=+0.305416599 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, distribution-scope=public, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 5 04:47:49 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:47:49 localhost podman[97188]: 2025-10-05 08:47:49.922516856 +0000 UTC m=+0.321582132 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-07-21T13:28:44, container_name=ovn_controller, version=17.1.9) Oct 5 04:47:49 localhost podman[97189]: 2025-10-05 08:47:49.978317798 +0000 
UTC m=+0.369082802 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:47:49 localhost podman[97189]: 2025-10-05 08:47:49.984811981 +0000 UTC m=+0.375576975 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, 
build-date=2025-07-21T13:27:15, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:47:49 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:47:50 localhost podman[97188]: 2025-10-05 08:47:50.005790353 +0000 UTC m=+0.404855589 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:47:50 localhost podman[97188]: unhealthy Oct 5 04:47:50 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:47:50 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:47:50 localhost podman[97210]: 2025-10-05 08:47:50.083471341 +0000 UTC m=+0.457203939 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, release=1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team) Oct 5 04:47:50 localhost podman[97210]: 2025-10-05 08:47:50.100902527 +0000 UTC m=+0.474635095 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:47:50 localhost podman[97210]: unhealthy Oct 5 04:47:50 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:47:50 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:47:50 localhost podman[97200]: 2025-10-05 08:47:50.222008765 +0000 UTC m=+0.603238723 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:47:50 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:47:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:47:57 localhost systemd[1]: tmp-crun.Brzap2.mount: Deactivated successfully. 
Oct 5 04:47:57 localhost podman[97361]: 2025-10-05 08:47:57.698489565 +0000 UTC m=+0.109555391 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, build-date=2025-07-21T14:48:37) Oct 5 04:47:57 localhost podman[97361]: 2025-10-05 08:47:57.757543384 +0000 UTC m=+0.168609200 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, version=17.1.9) 
Oct 5 04:47:57 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:48:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:48:00 localhost recover_tripleo_nova_virtqemud[97406]: 62622 Oct 5 04:48:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:48:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:48:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:48:10 localhost systemd[1]: tmp-crun.qTgY30.mount: Deactivated successfully. Oct 5 04:48:10 localhost podman[97470]: 2025-10-05 08:48:10.716422104 +0000 UTC m=+0.114731539 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_id=tripleo_step1, release=1, architecture=x86_64, build-date=2025-07-21T13:07:59, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-qdrouterd, tcib_managed=true) Oct 5 04:48:10 localhost podman[97470]: 2025-10-05 08:48:10.962936547 +0000 UTC m=+0.361245992 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:48:10 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:48:20 localhost systemd[1]: tmp-crun.LknTy3.mount: Deactivated successfully. Oct 5 04:48:20 localhost podman[97499]: 2025-10-05 08:48:20.719072833 +0000 UTC m=+0.119817325 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, release=2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) 
Oct 5 04:48:20 localhost podman[97499]: 2025-10-05 08:48:20.761696913 +0000 UTC m=+0.162441405 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd) Oct 5 04:48:20 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:48:20 localhost podman[97507]: 2025-10-05 08:48:20.733089658 +0000 UTC m=+0.115307825 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute) Oct 5 04:48:20 localhost podman[97521]: 2025-10-05 08:48:20.843941903 +0000 UTC m=+0.213229414 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:48:20 localhost podman[97501]: 2025-10-05 08:48:20.767209851 +0000 UTC m=+0.158518691 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, container_name=iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1) Oct 5 04:48:20 localhost podman[97516]: 2025-10-05 08:48:20.820681931 +0000 UTC m=+0.199851137 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1, build-date=2025-07-21T13:07:52, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:48:20 localhost podman[97502]: 2025-10-05 08:48:20.884905008 +0000 UTC m=+0.258726651 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:48:20 localhost podman[97516]: 2025-10-05 08:48:20.900208427 +0000 UTC m=+0.279377573 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, config_id=tripleo_step4, container_name=logrotate_crond) Oct 5 04:48:20 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:48:20 localhost podman[97502]: 2025-10-05 08:48:20.913585805 +0000 UTC m=+0.287407458 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, 
com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:48:20 localhost podman[97521]: 2025-10-05 08:48:20.923547931 +0000 UTC m=+0.292835392 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., release=1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:48:20 localhost podman[97521]: unhealthy Oct 5 04:48:20 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:48:20 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:48:20 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:48:20 localhost podman[97513]: 2025-10-05 08:48:20.785620863 +0000 UTC m=+0.166572146 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.9, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64) Oct 5 04:48:20 localhost podman[97500]: 2025-10-05 08:48:20.970278781 +0000 UTC m=+0.365616408 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, version=17.1.9, io.openshift.expose-services=, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) Oct 5 04:48:20 localhost podman[97500]: 2025-10-05 08:48:20.986241748 +0000 UTC m=+0.381579455 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, 
name=rhosp17/openstack-ovn-controller, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:48:20 localhost podman[97500]: unhealthy Oct 5 04:48:20 localhost podman[97501]: 2025-10-05 08:48:20.998597958 +0000 UTC m=+0.389906848 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, vcs-type=git, container_name=iscsid) Oct 5 04:48:21 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:48:21 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:48:21 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:48:21 localhost podman[97507]: 2025-10-05 08:48:21.057714969 +0000 UTC m=+0.439933206 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, container_name=nova_migration_target, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:48:21 localhost podman[97513]: 2025-10-05 08:48:21.067352098 +0000 UTC m=+0.448303401 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, container_name=ceilometer_agent_ipmi) Oct 5 04:48:21 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:48:21 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:48:28 localhost podman[97668]: 2025-10-05 08:48:28.688483895 +0000 UTC m=+0.091814316 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, distribution-scope=public) Oct 5 04:48:28 localhost podman[97668]: 2025-10-05 08:48:28.74885557 +0000 UTC m=+0.152186011 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public) Oct 
5 04:48:28 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:48:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:48:41 localhost podman[97696]: 2025-10-05 08:48:41.705896699 +0000 UTC m=+0.104207863 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1) Oct 5 04:48:41 localhost podman[97696]: 2025-10-05 08:48:41.910945603 +0000 UTC m=+0.309256767 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 5 04:48:41 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:48:51 localhost podman[97726]: 2025-10-05 08:48:51.721521985 +0000 UTC m=+0.122026004 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=2, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.openshift.expose-services=) Oct 5 04:48:51 localhost podman[97726]: 2025-10-05 08:48:51.754846105 +0000 UTC m=+0.155350144 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, vcs-type=git) Oct 5 04:48:51 localhost systemd[1]: tmp-crun.N4pC5T.mount: Deactivated successfully. 
Oct 5 04:48:51 localhost podman[97727]: 2025-10-05 08:48:51.767429035 +0000 UTC m=+0.166518026 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=) Oct 5 04:48:51 localhost systemd[1]: 
0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:48:51 localhost podman[97735]: 2025-10-05 08:48:51.780310342 +0000 UTC m=+0.169022613 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, container_name=nova_migration_target) Oct 5 04:48:51 localhost podman[97728]: 2025-10-05 08:48:51.829477229 +0000 UTC m=+0.225199250 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.9, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T13:27:15) Oct 5 04:48:51 localhost podman[97727]: 2025-10-05 08:48:51.840917248 +0000 UTC m=+0.240006219 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.buildah.version=1.33.12, 
managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:48:51 localhost podman[97742]: 2025-10-05 08:48:51.885103821 +0000 UTC m=+0.267379639 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:48:51 localhost podman[97749]: 2025-10-05 08:48:51.840569359 +0000 UTC m=+0.219000812 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 5 04:48:51 localhost podman[97728]: 2025-10-05 08:48:51.897790633 +0000 UTC m=+0.293512654 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:48:51 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:48:51 localhost podman[97742]: 2025-10-05 08:48:51.936756765 +0000 UTC m=+0.319032573 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:48:51 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:48:51 localhost podman[97727]: unhealthy Oct 5 04:48:51 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:48:51 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:48:51 localhost podman[97749]: 2025-10-05 08:48:51.975447019 +0000 UTC m=+0.353878472 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, architecture=x86_64, release=1, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:48:51 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:48:51 localhost podman[97755]: 2025-10-05 08:48:51.940875966 +0000 UTC m=+0.316785001 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 5 04:48:52 localhost podman[97729]: 2025-10-05 08:48:51.901461923 +0000 UTC m=+0.286707131 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, release=1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:48:52 localhost podman[97755]: 2025-10-05 08:48:52.023844405 +0000 UTC m=+0.399753430 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.9, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) 
Oct 5 04:48:52 localhost podman[97755]: unhealthy Oct 5 04:48:52 localhost podman[97729]: 2025-10-05 08:48:52.033853126 +0000 UTC m=+0.419098314 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., release=1) Oct 5 04:48:52 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:48:52 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:48:52 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:48:52 localhost podman[97735]: 2025-10-05 08:48:52.121856231 +0000 UTC m=+0.510568562 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:48:52 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:48:52 localhost systemd[1]: tmp-crun.171qcZ.mount: Deactivated successfully. Oct 5 04:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:48:59 localhost podman[97898]: 2025-10-05 08:48:59.685687079 +0000 UTC m=+0.091083039 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T14:48:37) Oct 5 04:48:59 localhost podman[97898]: 2025-10-05 08:48:59.717944811 +0000 UTC m=+0.123340731 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true) Oct 5 
04:48:59 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:49:03 localhost podman[98026]: 2025-10-05 08:49:03.527188039 +0000 UTC m=+0.089477876 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, release=553, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:49:03 localhost podman[98026]: 2025-10-05 08:49:03.657981288 +0000 UTC m=+0.220271125 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, version=7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, RELEASE=main, vcs-type=git, 
GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=) Oct 5 04:49:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:49:12 localhost podman[98170]: 2025-10-05 08:49:12.703811601 +0000 UTC m=+0.102976730 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:49:12 localhost podman[98170]: 2025-10-05 08:49:12.927776786 +0000 UTC m=+0.326941895 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:49:12 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:49:13 localhost sshd[98199]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:49:22 localhost systemd[1]: tmp-crun.fdrNXS.mount: Deactivated successfully. 
Oct 5 04:49:22 localhost podman[98201]: 2025-10-05 08:49:22.740205377 +0000 UTC m=+0.139970419 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, release=2, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, container_name=collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true) Oct 5 04:49:22 localhost podman[98201]: 2025-10-05 08:49:22.748213724 +0000 UTC m=+0.147978686 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 5 04:49:22 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:49:22 localhost podman[98203]: 2025-10-05 08:49:22.762665104 +0000 UTC m=+0.158001756 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-iscsid) Oct 5 04:49:22 localhost podman[98202]: 2025-10-05 08:49:22.873576688 +0000 UTC m=+0.271128800 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.9, container_name=ovn_controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, tcib_managed=true) Oct 5 04:49:22 localhost podman[98204]: 2025-10-05 08:49:22.833912186 +0000 UTC m=+0.223962796 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible) Oct 5 04:49:22 localhost podman[98203]: 2025-10-05 08:49:22.885610212 +0000 UTC m=+0.280946924 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 5 04:49:22 localhost podman[98233]: 2025-10-05 08:49:22.889469786 +0000 UTC m=+0.269406352 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) 
Oct 5 04:49:22 localhost podman[98222]: 2025-10-05 08:49:22.863173047 +0000 UTC m=+0.239165527 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, tcib_managed=true, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, name=rhosp17/openstack-cron, vcs-type=git, 
managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:49:22 localhost podman[98204]: 2025-10-05 08:49:22.914933883 +0000 UTC m=+0.304984453 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) Oct 5 04:49:22 localhost podman[98233]: 2025-10-05 08:49:22.929581309 +0000 UTC m=+0.309517815 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-07-21T16:28:53, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20250721.1) Oct 5 04:49:22 localhost podman[98218]: 2025-10-05 08:49:22.932090976 +0000 UTC m=+0.314542620 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, 
io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:49:22 localhost podman[98222]: 2025-10-05 08:49:22.944037639 +0000 UTC m=+0.320030129 container exec_died 
c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, container_name=logrotate_crond, io.openshift.expose-services=, release=1, vcs-type=git, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:49:22 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:49:22 localhost podman[98202]: 2025-10-05 08:49:22.964216344 +0000 UTC m=+0.361768486 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, batch=17.1_20250721.1, container_name=ovn_controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, 
build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:49:22 localhost podman[98202]: unhealthy Oct 5 04:49:22 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:49:22 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:49:22 localhost podman[98233]: unhealthy Oct 5 04:49:23 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:49:23 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:49:23 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:49:23 localhost podman[98210]: 2025-10-05 08:49:22.965623131 +0000 UTC m=+0.355682041 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.9, container_name=nova_migration_target, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 5 04:49:23 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:49:23 localhost podman[98218]: 2025-10-05 08:49:23.042767304 +0000 UTC m=+0.425218958 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.9, architecture=x86_64, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:49:23 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:49:23 localhost podman[98210]: 2025-10-05 08:49:23.401907438 +0000 UTC m=+0.791966348 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, release=1, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:49:23 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:49:30 localhost systemd[1]: tmp-crun.L6PFqt.mount: Deactivated successfully. 
Oct 5 04:49:30 localhost podman[98369]: 2025-10-05 08:49:30.673415415 +0000 UTC m=+0.084643405 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Oct 5 04:49:30 localhost podman[98369]: 2025-10-05 08:49:30.697741582 +0000 UTC m=+0.108969572 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, architecture=x86_64, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, version=17.1.9, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, 
io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute) Oct 5 04:49:30 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:49:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:49:43 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:49:43 localhost recover_tripleo_nova_virtqemud[98402]: 62622 Oct 5 04:49:43 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:49:43 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:49:43 localhost podman[98395]: 2025-10-05 08:49:43.672523721 +0000 UTC m=+0.076449825 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, config_id=tripleo_step1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:49:43 localhost podman[98395]: 2025-10-05 08:49:43.868778098 +0000 UTC m=+0.272704162 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:49:43 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:49:53 localhost systemd[1]: tmp-crun.Qv3Rl5.mount: Deactivated successfully. Oct 5 04:49:53 localhost podman[98452]: 2025-10-05 08:49:53.741661043 +0000 UTC m=+0.115925769 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:49:53 localhost podman[98428]: 2025-10-05 08:49:53.758278842 +0000 UTC m=+0.147236385 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, 
architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.9, build-date=2025-07-21T14:45:33, 
com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:49:53 localhost podman[98425]: 2025-10-05 08:49:53.824100159 +0000 UTC m=+0.222190378 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Oct 5 04:49:53 localhost podman[98452]: 2025-10-05 08:49:53.829777932 +0000 UTC m=+0.204042718 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, distribution-scope=public) Oct 5 04:49:53 localhost podman[98452]: unhealthy Oct 5 04:49:53 localhost podman[98425]: 2025-10-05 08:49:53.838782805 +0000 UTC m=+0.236873054 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, tcib_managed=true, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, release=2, architecture=x86_64, version=17.1.9, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, container_name=collectd, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 
17.1 collectd) Oct 5 04:49:53 localhost podman[98434]: 2025-10-05 08:49:53.788656442 +0000 UTC m=+0.158400386 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, release=1) Oct 5 04:49:53 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:49:53 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:49:53 localhost podman[98435]: 2025-10-05 08:49:53.868498588 +0000 UTC m=+0.250580716 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, distribution-scope=public, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:49:53 localhost podman[98426]: 2025-10-05 08:49:53.716940247 +0000 UTC m=+0.109875778 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44) Oct 5 04:49:53 localhost podman[98427]: 2025-10-05 08:49:53.917367986 +0000 UTC m=+0.310520392 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1) Oct 5 04:49:53 localhost podman[98427]: 2025-10-05 08:49:53.952286449 +0000 UTC m=+0.345438855 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15) Oct 5 04:49:53 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:49:53 localhost podman[98446]: 2025-10-05 08:49:53.977451958 +0000 UTC m=+0.354652904 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.expose-services=, release=1) Oct 5 04:49:54 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:49:54 localhost podman[98446]: 2025-10-05 08:49:54.014611071 +0000 UTC m=+0.391811987 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public) Oct 5 04:49:54 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:49:54 localhost podman[98428]: 2025-10-05 08:49:54.049154453 +0000 UTC m=+0.438111976 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, container_name=ceilometer_agent_compute, 
com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:49:54 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:49:54 localhost podman[98426]: 2025-10-05 08:49:54.099647216 +0000 UTC m=+0.492582787 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Oct 5 04:49:54 localhost podman[98426]: unhealthy Oct 5 04:49:54 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:49:54 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:49:54 localhost podman[98435]: 2025-10-05 08:49:54.150724785 +0000 UTC m=+0.532806993 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git) Oct 5 04:49:54 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:49:54 localhost podman[98434]: 2025-10-05 08:49:54.162956195 +0000 UTC m=+0.532700209 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 5 04:49:54 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:50:01 localhost systemd[1]: tmp-crun.ySi0Il.mount: Deactivated successfully. 
Oct 5 04:50:01 localhost podman[98588]: 2025-10-05 08:50:01.686430584 +0000 UTC m=+0.095300343 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, container_name=nova_compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37) Oct 5 04:50:01 localhost podman[98588]: 2025-10-05 08:50:01.717568326 +0000 UTC m=+0.126438085 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step5, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:50:01 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:50:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:50:14 localhost podman[98693]: 2025-10-05 08:50:14.693499634 +0000 UTC m=+0.099124287 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.33.12, release=1, version=17.1.9, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, batch=17.1_20250721.1) Oct 5 04:50:14 localhost podman[98693]: 2025-10-05 08:50:14.883841361 +0000 UTC m=+0.289466014 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, tcib_managed=true, version=17.1.9, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_id=tripleo_step1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:50:14 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:50:24 localhost podman[98735]: 2025-10-05 08:50:24.718441875 +0000 UTC m=+0.101307035 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-type=git) Oct 5 04:50:24 localhost podman[98730]: 2025-10-05 08:50:24.73864779 +0000 UTC m=+0.108960622 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1) Oct 5 04:50:24 localhost podman[98724]: 2025-10-05 08:50:24.818673581 +0000 UTC m=+0.209085115 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public) Oct 5 04:50:24 localhost podman[98724]: 2025-10-05 08:50:24.828706741 +0000 UTC m=+0.219118255 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, container_name=iscsid, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:50:24 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated 
successfully. Oct 5 04:50:24 localhost podman[98722]: 2025-10-05 08:50:24.861143247 +0000 UTC m=+0.262637070 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=2, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 5 04:50:24 localhost podman[98723]: 2025-10-05 08:50:24.769486593 +0000 UTC m=+0.165596691 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.9, config_id=tripleo_step4, release=1) Oct 5 04:50:24 localhost podman[98743]: 2025-10-05 08:50:24.793485381 +0000 UTC m=+0.171482349 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git) Oct 5 04:50:24 localhost podman[98723]: 2025-10-05 08:50:24.903008127 +0000 UTC m=+0.299118245 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=ovn_controller, managed_by=tripleo_ansible) Oct 5 04:50:24 localhost podman[98723]: unhealthy Oct 5 04:50:24 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:50:24 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 04:50:24 localhost podman[98737]: 2025-10-05 08:50:24.872200336 +0000 UTC m=+0.254116541 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.9, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Oct 5 04:50:24 localhost podman[98743]: 2025-10-05 08:50:24.923091729 +0000 UTC m=+0.301088707 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64) Oct 5 04:50:24 localhost podman[98730]: 2025-10-05 08:50:24.97315927 +0000 UTC m=+0.343472112 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, release=1, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Oct 5 04:50:24 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:50:25 localhost podman[98722]: 2025-10-05 08:50:24.999867562 +0000 UTC m=+0.401361405 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:50:25 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:50:25 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:50:25 localhost podman[98737]: 2025-10-05 08:50:25.050792026 +0000 UTC m=+0.432708211 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:50:25 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:50:25 localhost podman[98745]: 2025-10-05 08:50:24.974180548 +0000 UTC m=+0.347590853 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:50:25 localhost podman[98735]: 2025-10-05 08:50:25.089276734 +0000 UTC m=+0.472141944 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, tcib_managed=true) Oct 5 04:50:25 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. 
Oct 5 04:50:25 localhost podman[98745]: 2025-10-05 08:50:25.107652911 +0000 UTC m=+0.481063216 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) Oct 5 04:50:25 localhost podman[98745]: unhealthy Oct 5 04:50:25 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:50:25 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:50:25 localhost systemd[1]: tmp-crun.pjZs3I.mount: Deactivated successfully. Oct 5 04:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:50:32 localhost systemd[1]: tmp-crun.aw523r.mount: Deactivated successfully. 
Oct 5 04:50:32 localhost podman[98890]: 2025-10-05 08:50:32.689633892 +0000 UTC m=+0.100144755 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:50:32 localhost podman[98890]: 2025-10-05 08:50:32.745005015 +0000 UTC m=+0.155515838 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack 
Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 5 04:50:32 localhost systemd[1]: 
5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully.
Oct 5 04:50:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 04:50:45 localhost podman[98916]: 2025-10-05 08:50:45.702245784 +0000 UTC m=+0.113546377 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro',
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 5 04:50:45 localhost podman[98916]: 2025-10-05 08:50:45.917926505 +0000 UTC m=+0.329227068 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro',
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git)
Oct 5 04:50:45 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.
Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.
Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.
Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.
Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.
Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.
Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.
Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.
Oct 5 04:50:55 localhost systemd[1]: tmp-crun.lT7T1p.mount: Deactivated successfully.
Oct 5 04:50:55 localhost podman[98948]: 2025-10-05 08:50:55.77581525 +0000 UTC m=+0.167990405 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro',
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15)
Oct 5 04:50:55 localhost podman[98973]: 2025-10-05 08:50:55.732951013 +0000 UTC m=+0.104750148 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, managed_by=tripleo_ansible, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH':
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4)
Oct 5 04:50:55 localhost podman[98949]: 2025-10-05 08:50:55.762190522 +0000 UTC m=+0.149842316 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro',
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9)
Oct 5 04:50:55 localhost podman[98973]: 2025-10-05 08:50:55.817758892 +0000 UTC m=+0.189558047 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1',
'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.33.12, architecture=x86_64, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 5 04:50:55 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully.
Oct 5 04:50:55 localhost podman[98955]: 2025-10-05 08:50:55.83438315 +0000 UTC m=+0.218404706 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, release=1, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, 
com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 5 04:50:55 localhost podman[98947]: 2025-10-05 08:50:55.821990126 +0000 UTC m=+0.217180903 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, container_name=ovn_controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller,
com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller)
Oct 5 04:50:55 localhost podman[98967]: 2025-10-05 08:50:55.884187695 +0000 UTC m=+0.256668539 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team,
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, release=1, version=17.1.9)
Oct 5 04:50:55 localhost podman[98947]: 2025-10-05 08:50:55.904641587 +0000 UTC m=+0.299832384 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']},
architecture=x86_64, container_name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, release=1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=)
Oct 5 04:50:55 localhost podman[98947]: unhealthy
Oct 5 04:50:55 localhost podman[98946]: 2025-10-05 08:50:55.706263933 +0000 UTC m=+0.106937047 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro',
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Oct 5 04:50:55 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 04:50:55 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'.
Oct 5 04:50:55 localhost podman[98967]: 2025-10-05 08:50:55.938686156 +0000 UTC m=+0.311167050 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container)
Oct 5 04:50:55 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully.
Oct 5 04:50:55 localhost podman[98948]: 2025-10-05 08:50:55.964350379 +0000 UTC m=+0.356525614 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, vcs-type=git, vendor=Red Hat, Inc.)
Oct 5 04:50:55 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully.
Oct 5 04:50:55 localhost podman[98946]: 2025-10-05 08:50:55.991966504 +0000 UTC m=+0.392639588 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro',
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:50:56 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:50:56 localhost podman[98949]: 2025-10-05 08:50:56.043906006 +0000 UTC m=+0.431558190 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, tcib_managed=true, container_name=ceilometer_agent_compute, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO 
Team, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.9) Oct 5 04:50:56 localhost podman[98979]: 2025-10-05 08:50:56.05070352 +0000 UTC m=+0.413421280 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 04:50:56 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:50:56 localhost podman[98979]: 2025-10-05 08:50:56.092861248 +0000 UTC m=+0.455578968 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:50:56 localhost podman[98979]: unhealthy Oct 5 04:50:56 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:50:56 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:50:56 localhost podman[98955]: 2025-10-05 08:50:56.186190386 +0000 UTC m=+0.570211982 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:50:56 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:51:03 localhost podman[99115]: 2025-10-05 08:51:03.676634952 +0000 UTC m=+0.084330377 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, container_name=nova_compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1) Oct 5 04:51:03 localhost podman[99115]: 2025-10-05 08:51:03.704979847 +0000 UTC m=+0.112675232 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack 
Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, config_id=tripleo_step5, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:51:03 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:51:07 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:51:07 localhost recover_tripleo_nova_virtqemud[99158]: 62622 Oct 5 04:51:07 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:51:07 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:51:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:51:16 localhost podman[99222]: 2025-10-05 08:51:16.695304103 +0000 UTC m=+0.094690417 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, vcs-type=git, batch=17.1_20250721.1, container_name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1) Oct 5 04:51:16 localhost podman[99222]: 2025-10-05 08:51:16.895830056 +0000 UTC m=+0.295216340 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, release=1, io.openshift.expose-services=) Oct 5 04:51:16 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:51:26 localhost podman[99272]: 2025-10-05 08:51:26.716080635 +0000 UTC m=+0.092824666 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, 
vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1) Oct 5 04:51:26 localhost podman[99265]: 2025-10-05 08:51:26.765624523 +0000 UTC m=+0.152267982 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 04:51:26 localhost podman[99254]: 2025-10-05 08:51:26.756853276 +0000 UTC m=+0.150234956 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:51:26 localhost podman[99252]: 2025-10-05 08:51:26.809801036 +0000 UTC m=+0.208964762 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, release=1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, container_name=ovn_controller) Oct 5 04:51:26 localhost podman[99272]: 2025-10-05 08:51:26.818268124 +0000 UTC m=+0.195012155 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 5 04:51:26 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:51:26 localhost podman[99253]: 2025-10-05 08:51:26.859769034 +0000 UTC m=+0.255226560 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, release=1, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step3, 
maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:51:26 localhost podman[99251]: 2025-10-05 08:51:26.866461685 +0000 UTC m=+0.267948884 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, container_name=collectd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1) Oct 5 04:51:26 localhost podman[99253]: 2025-10-05 08:51:26.869649201 +0000 UTC m=+0.265106707 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Oct 5 04:51:26 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:51:26 localhost podman[99252]: 2025-10-05 08:51:26.898599002 +0000 UTC m=+0.297762708 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., container_name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:51:26 localhost podman[99252]: unhealthy Oct 5 04:51:26 localhost podman[99277]: 2025-10-05 
08:51:26.906941447 +0000 UTC m=+0.285286702 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, tcib_managed=true, distribution-scope=public, architecture=x86_64, version=17.1.9, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:51:26 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:51:26 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:51:26 localhost podman[99277]: 2025-10-05 08:51:26.917664397 +0000 UTC m=+0.296009652 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, container_name=logrotate_crond, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:51:26 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:51:26 localhost podman[99254]: 2025-10-05 08:51:26.940174114 +0000 UTC m=+0.333555784 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, version=17.1.9, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:51:26 localhost podman[99251]: 2025-10-05 08:51:26.952278451 +0000 UTC m=+0.353765650 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, io.buildah.version=1.33.12, 
batch=17.1_20250721.1, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., version=17.1.9, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Oct 5 04:51:26 localhost systemd[1]: 
0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:51:26 localhost podman[99282]: 2025-10-05 08:51:26.967163543 +0000 UTC m=+0.335515107 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:51:27 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:51:27 localhost podman[99282]: 2025-10-05 08:51:27.011805078 +0000 UTC m=+0.380156672 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64) Oct 5 04:51:27 localhost podman[99282]: unhealthy Oct 5 04:51:27 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:51:27 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:51:27 localhost podman[99265]: 2025-10-05 08:51:27.103725049 +0000 UTC m=+0.490368498 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO 
Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=) Oct 5 04:51:27 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:51:34 localhost podman[99424]: 2025-10-05 08:51:34.687530676 +0000 UTC m=+0.087881423 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, version=17.1.9, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 5 04:51:34 localhost podman[99424]: 2025-10-05 08:51:34.721715669 +0000 UTC m=+0.122066366 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack 
osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20250721.1, container_name=nova_compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, com.redhat.component=openstack-nova-compute-container) Oct 5 04:51:34 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:51:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:51:47 localhost systemd[1]: tmp-crun.0RTvB7.mount: Deactivated successfully. Oct 5 04:51:47 localhost podman[99450]: 2025-10-05 08:51:47.671055073 +0000 UTC m=+0.085284673 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:51:47 localhost podman[99450]: 2025-10-05 08:51:47.905957503 +0000 UTC m=+0.320187113 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, release=1, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible) Oct 5 04:51:47 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:51:57 localhost podman[99481]: 2025-10-05 08:51:57.686414932 +0000 UTC m=+0.083892466 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Oct 5 04:51:57 localhost podman[99512]: 2025-10-05 08:51:57.713688277 +0000 UTC m=+0.082373994 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, 
vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9) Oct 5 04:51:57 localhost podman[99481]: 2025-10-05 08:51:57.718016304 +0000 UTC m=+0.115493858 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:51:57 localhost podman[99505]: 2025-10-05 08:51:57.756072191 +0000 UTC m=+0.132266591 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron) Oct 5 04:51:57 localhost podman[99512]: 2025-10-05 08:51:57.759789382 +0000 UTC m=+0.128475079 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Oct 5 04:51:57 localhost podman[99512]: unhealthy Oct 5 04:51:57 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:51:57 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:51:57 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:51:57 localhost podman[99505]: 2025-10-05 08:51:57.815626429 +0000 UTC m=+0.191820829 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=logrotate_crond) Oct 5 04:51:57 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:51:57 localhost podman[99499]: 2025-10-05 08:51:57.833607534 +0000 UTC m=+0.208548539 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, 
io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:51:57 localhost podman[99499]: 2025-10-05 08:51:57.869827562 +0000 UTC m=+0.244768577 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4) Oct 5 04:51:57 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:51:57 localhost podman[99484]: 2025-10-05 08:51:57.897193581 +0000 UTC m=+0.279972268 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1) Oct 5 04:51:57 localhost podman[99480]: 2025-10-05 08:51:57.805913037 +0000 UTC m=+0.203904605 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44) Oct 5 04:51:57 localhost podman[99484]: 2025-10-05 08:51:57.927581341 +0000 UTC m=+0.310360028 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team) Oct 5 04:51:57 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:51:57 localhost podman[99480]: 2025-10-05 08:51:57.93977839 +0000 UTC m=+0.337769948 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44) Oct 5 04:51:57 localhost podman[99480]: unhealthy Oct 5 04:51:57 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:51:57 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:51:58 localhost podman[99479]: 2025-10-05 08:51:58.012838492 +0000 UTC m=+0.413796140 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, container_name=collectd, managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:51:58 localhost podman[99479]: 2025-10-05 08:51:58.022218205 +0000 UTC m=+0.423175853 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, vcs-type=git, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 5 04:51:58 localhost podman[99498]: 2025-10-05 08:51:57.97981049 +0000 UTC m=+0.359260598 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, 
name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:51:58 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:51:58 localhost podman[99498]: 2025-10-05 08:51:58.381221116 +0000 UTC m=+0.760671234 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 5 04:51:58 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:52:05 localhost podman[99647]: 2025-10-05 08:52:05.699647068 +0000 UTC m=+0.102906639 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team) Oct 5 04:52:05 localhost podman[99647]: 2025-10-05 08:52:05.736738549 +0000 UTC m=+0.139998120 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, 
name=rhosp17/openstack-nova-compute, container_name=nova_compute, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37) Oct 5 04:52:05 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:52:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:52:18 localhost systemd[1]: tmp-crun.6owTT4.mount: Deactivated successfully. 
Oct 5 04:52:18 localhost podman[99749]: 2025-10-05 08:52:18.709605225 +0000 UTC m=+0.112075016 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, 
config_id=tripleo_step1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9) Oct 5 04:52:18 localhost podman[99749]: 2025-10-05 08:52:18.910371764 +0000 UTC m=+0.312841555 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-type=git, 
com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, distribution-scope=public) Oct 5 04:52:18 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:52:28 localhost podman[99781]: 2025-10-05 08:52:28.728511212 +0000 UTC m=+0.120454856 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, vcs-type=git, container_name=iscsid, distribution-scope=public, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 5 04:52:28 localhost systemd[1]: tmp-crun.hjxAwr.mount: Deactivated successfully. Oct 5 04:52:28 localhost podman[99794]: 2025-10-05 08:52:28.783869647 +0000 UTC m=+0.162047278 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:52:28 localhost podman[99780]: 2025-10-05 08:52:28.832907131 +0000 UTC m=+0.228305660 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, container_name=ovn_controller, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1) Oct 5 04:52:28 localhost podman[99806]: 2025-10-05 08:52:28.79832038 +0000 UTC m=+0.162994174 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vendor=Red Hat, Inc.) 
Oct 5 04:52:28 localhost podman[99782]: 2025-10-05 08:52:28.876628889 +0000 UTC m=+0.264635207 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T14:45:33) Oct 5 04:52:28 localhost podman[99794]: 2025-10-05 08:52:28.885072249 +0000 UTC m=+0.263249880 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:52:28 localhost podman[99781]: 2025-10-05 08:52:28.895042911 +0000 UTC m=+0.286986545 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, release=1, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, container_name=iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Oct 5 04:52:28 localhost podman[99785]: 2025-10-05 08:52:28.927602086 +0000 UTC m=+0.305958732 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.buildah.version=1.33.12, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:52:28 localhost podman[99782]: 2025-10-05 08:52:28.933878197 +0000 UTC m=+0.321884515 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, 
com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:52:28 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:52:29 localhost podman[99780]: 2025-10-05 08:52:29.000590831 +0000 UTC m=+0.395989360 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 
ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:52:29 localhost podman[99780]: unhealthy Oct 5 04:52:29 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:52:29 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:52:29 localhost podman[99800]: 2025-10-05 08:52:29.019817603 +0000 UTC m=+0.393677136 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, distribution-scope=public, container_name=logrotate_crond, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4) Oct 5 04:52:29 localhost podman[99800]: 2025-10-05 08:52:29.028703136 +0000 UTC m=+0.402562659 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, name=rhosp17/openstack-cron, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 
'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_id=tripleo_step4) Oct 5 04:52:29 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:52:29 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:52:29 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:52:29 localhost podman[99806]: 2025-10-05 08:52:29.132964341 +0000 UTC m=+0.497638165 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:52:29 localhost podman[99806]: unhealthy Oct 5 04:52:29 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:52:29 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:52:29 localhost podman[99779]: 2025-10-05 08:52:29.083926937 +0000 UTC m=+0.479297325 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, vcs-type=git, container_name=collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9) Oct 5 04:52:29 localhost podman[99779]: 2025-10-05 08:52:29.215068163 +0000 UTC m=+0.610438511 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=collectd, version=17.1.9) Oct 5 04:52:29 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:52:29 localhost podman[99785]: 2025-10-05 08:52:29.277860362 +0000 UTC m=+0.656216988 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=nova_migration_target, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:52:29 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:52:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:52:36 localhost podman[99949]: 2025-10-05 08:52:36.673849653 +0000 UTC m=+0.084038096 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, vendor=Red Hat, Inc., container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:52:36 localhost podman[99949]: 2025-10-05 08:52:36.7318617 +0000 UTC m=+0.142050153 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, 
name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, container_name=nova_compute, release=1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc.) Oct 5 04:52:36 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:52:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:52:49 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:52:49 localhost recover_tripleo_nova_virtqemud[99978]: 62622 Oct 5 04:52:49 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:52:49 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:52:49 localhost systemd[1]: tmp-crun.UklMzZ.mount: Deactivated successfully. 
Oct 5 04:52:49 localhost podman[99975]: 2025-10-05 08:52:49.69460073 +0000 UTC m=+0.098457648 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, version=17.1.9, build-date=2025-07-21T13:07:59) Oct 5 04:52:49 localhost podman[99975]: 2025-10-05 08:52:49.897055496 +0000 UTC m=+0.300912384 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, version=17.1.9, container_name=metrics_qdr, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git) Oct 5 04:52:49 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:52:54 localhost systemd[1]: session-29.scope: Deactivated successfully. Oct 5 04:52:54 localhost systemd[1]: session-29.scope: Consumed 7min 21.789s CPU time. Oct 5 04:52:54 localhost systemd-logind[760]: Session 29 logged out. Waiting for processes to exit. Oct 5 04:52:54 localhost systemd-logind[760]: Removed session 29. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:52:59 localhost podman[100034]: 2025-10-05 08:52:59.740837917 +0000 UTC m=+0.097793931 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Oct 5 04:52:59 localhost podman[100006]: 2025-10-05 08:52:59.716837433 +0000 UTC m=+0.108772119 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, container_name=collectd, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc.) 
Oct 5 04:52:59 localhost podman[100034]: 2025-10-05 08:52:59.780082524 +0000 UTC m=+0.137038528 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent) Oct 5 04:52:59 localhost podman[100008]: 2025-10-05 08:52:59.755803623 +0000 UTC m=+0.145560939 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git) Oct 5 04:52:59 localhost podman[100032]: 2025-10-05 08:52:59.819696291 +0000 UTC m=+0.186822982 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron) Oct 5 04:52:59 localhost podman[100034]: unhealthy Oct 5 04:52:59 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:52:59 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:52:59 localhost podman[100028]: 2025-10-05 08:52:59.88068815 +0000 UTC m=+0.243940995 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public) Oct 5 04:52:59 localhost podman[100008]: 2025-10-05 08:52:59.887211327 +0000 UTC m=+0.276968663 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15) Oct 5 04:52:59 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:52:59 localhost podman[100006]: 2025-10-05 08:52:59.901447274 +0000 UTC m=+0.293381960 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=collectd, vcs-type=git, vendor=Red Hat, Inc., release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Oct 5 04:52:59 localhost podman[100032]: 2025-10-05 08:52:59.909891624 +0000 UTC m=+0.277018305 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, distribution-scope=public, name=rhosp17/openstack-cron, release=1, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:52:59 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:52:59 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:52:59 localhost podman[100009]: 2025-10-05 08:52:59.785432309 +0000 UTC m=+0.169522041 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:52:59 localhost podman[100028]: 2025-10-05 08:52:59.944422223 +0000 UTC m=+0.307675068 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:52:59 localhost podman[100019]: 2025-10-05 08:52:59.804590371 +0000 UTC m=+0.177863429 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.9, architecture=x86_64, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, distribution-scope=public) Oct 5 04:52:59 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:52:59 localhost podman[100009]: 2025-10-05 08:52:59.96860552 +0000 UTC m=+0.352695222 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, tcib_managed=true) Oct 5 04:52:59 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:53:00 localhost podman[100007]: 2025-10-05 08:53:00.017876731 +0000 UTC m=+0.407837123 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, container_name=ovn_controller) Oct 5 04:53:00 localhost podman[100007]: 2025-10-05 08:53:00.037016821 +0000 UTC m=+0.426977303 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller) Oct 5 04:53:00 localhost podman[100007]: unhealthy Oct 5 04:53:00 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:53:00 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:53:00 localhost podman[100019]: 2025-10-05 08:53:00.207810446 +0000 UTC m=+0.581083494 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:53:00 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:53:04 localhost systemd[1]: Stopping User Manager for UID 1003... Oct 5 04:53:04 localhost systemd[35553]: Activating special unit Exit the Session... Oct 5 04:53:04 localhost systemd[35553]: Removed slice User Background Tasks Slice. Oct 5 04:53:04 localhost systemd[35553]: Stopped target Main User Target. Oct 5 04:53:04 localhost systemd[35553]: Stopped target Basic System. Oct 5 04:53:04 localhost systemd[35553]: Stopped target Paths. Oct 5 04:53:04 localhost systemd[35553]: Stopped target Sockets. Oct 5 04:53:04 localhost systemd[35553]: Stopped target Timers. 
Oct 5 04:53:04 localhost systemd[35553]: Stopped Mark boot as successful after the user session has run 2 minutes. Oct 5 04:53:04 localhost systemd[35553]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 04:53:04 localhost systemd[35553]: Closed D-Bus User Message Bus Socket. Oct 5 04:53:04 localhost systemd[35553]: Stopped Create User's Volatile Files and Directories. Oct 5 04:53:04 localhost systemd[35553]: Removed slice User Application Slice. Oct 5 04:53:04 localhost systemd[35553]: Reached target Shutdown. Oct 5 04:53:04 localhost systemd[35553]: Finished Exit the Session. Oct 5 04:53:04 localhost systemd[35553]: Reached target Exit the Session. Oct 5 04:53:04 localhost systemd[1]: user@1003.service: Deactivated successfully. Oct 5 04:53:04 localhost systemd[1]: Stopped User Manager for UID 1003. Oct 5 04:53:04 localhost systemd[1]: user@1003.service: Consumed 4.199s CPU time, read 0B from disk, written 7.0K to disk. Oct 5 04:53:04 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Oct 5 04:53:04 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Oct 5 04:53:04 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Oct 5 04:53:04 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Oct 5 04:53:04 localhost systemd[1]: Removed slice User Slice of UID 1003. Oct 5 04:53:04 localhost systemd[1]: user-1003.slice: Consumed 7min 26.017s CPU time. Oct 5 04:53:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:53:07 localhost systemd[1]: tmp-crun.3QlcFv.mount: Deactivated successfully. 
Oct 5 04:53:07 localhost podman[100180]: 2025-10-05 08:53:07.672270369 +0000 UTC m=+0.074201898 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=nova_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 04:53:07 localhost podman[100180]: 2025-10-05 08:53:07.719235417 +0000 UTC m=+0.121166926 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public) Oct 
5 04:53:07 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:53:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:53:20 localhost podman[100282]: 2025-10-05 08:53:20.688990537 +0000 UTC m=+0.097401310 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=metrics_qdr, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1) Oct 5 04:53:20 localhost podman[100282]: 2025-10-05 08:53:20.91819176 +0000 UTC m=+0.326602523 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:53:20 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:53:30 localhost systemd[1]: tmp-crun.OZp8Ml.mount: Deactivated successfully. Oct 5 04:53:30 localhost podman[100309]: 2025-10-05 08:53:30.722406515 +0000 UTC m=+0.121303520 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd) Oct 5 04:53:30 localhost systemd[1]: tmp-crun.pEO40k.mount: Deactivated successfully. 
Oct 5 04:53:30 localhost podman[100312]: 2025-10-05 08:53:30.7443122 +0000 UTC m=+0.119629004 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
name=rhosp17/openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true) Oct 5 04:53:30 localhost podman[100310]: 2025-10-05 08:53:30.809884273 +0000 UTC m=+0.206398233 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.openshift.expose-services=, 
vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:53:30 localhost podman[100334]: 2025-10-05 08:53:30.764564221 +0000 UTC m=+0.143618396 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=logrotate_crond, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:53:30 localhost podman[100318]: 2025-10-05 08:53:30.786173189 +0000 UTC m=+0.160893756 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, release=1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Oct 5 04:53:30 localhost podman[100310]: 2025-10-05 08:53:30.877469682 +0000 UTC m=+0.273983672 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, release=1, io.openshift.expose-services=, 
build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:53:30 localhost podman[100310]: unhealthy Oct 5 04:53:30 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:53:30 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 04:53:30 localhost podman[100334]: 2025-10-05 08:53:30.896624602 +0000 UTC m=+0.275678797 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, version=17.1.9, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, name=rhosp17/openstack-cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:53:30 localhost podman[100337]: 2025-10-05 08:53:30.857652063 +0000 UTC m=+0.227350444 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, architecture=x86_64, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53) Oct 5 04:53:30 localhost podman[100311]: 2025-10-05 08:53:30.919802153 +0000 UTC m=+0.313476796 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Oct 5 04:53:30 localhost podman[100311]: 2025-10-05 08:53:30.928311975 +0000 UTC m=+0.321986658 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:53:30 localhost podman[100337]: 2025-10-05 08:53:30.937311859 +0000 UTC m=+0.307010180 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:53:30 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:53:30 localhost podman[100337]: unhealthy Oct 5 04:53:30 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:53:30 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:53:30 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:53:30 localhost podman[100312]: 2025-10-05 08:53:30.979482336 +0000 UTC m=+0.354799150 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:53:30 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:53:30 localhost podman[100309]: 2025-10-05 08:53:30.994622607 +0000 UTC m=+0.393519592 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, release=2, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:53:31 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:53:31 localhost podman[100325]: 2025-10-05 08:53:30.87962479 +0000 UTC m=+0.261764869 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi) Oct 5 04:53:31 localhost podman[100325]: 2025-10-05 08:53:31.060577531 +0000 UTC m=+0.442717570 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:53:31 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:53:31 localhost podman[100318]: 2025-10-05 08:53:31.171544759 +0000 UTC m=+0.546265376 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container) Oct 5 04:53:31 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:53:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:53:38 localhost podman[100479]: 2025-10-05 08:53:38.673387851 +0000 UTC m=+0.082644878 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container) Oct 5 04:53:38 localhost podman[100479]: 2025-10-05 08:53:38.703897181 +0000 UTC m=+0.113154208 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, version=17.1.9, vcs-type=git, build-date=2025-07-21T14:48:37, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 
04:53:38 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:53:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:53:51 localhost podman[100506]: 2025-10-05 08:53:51.680406734 +0000 UTC m=+0.084797538 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, version=17.1.9, config_id=tripleo_step1, maintainer=OpenStack TripleO Team) Oct 5 04:53:51 localhost podman[100506]: 2025-10-05 08:53:51.893104327 +0000 UTC m=+0.297495121 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, release=1, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9) Oct 5 04:53:51 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:54:01 localhost systemd[1]: tmp-crun.jY3oev.mount: Deactivated successfully. Oct 5 04:54:01 localhost podman[100535]: 2025-10-05 08:54:01.697583759 +0000 UTC m=+0.103880136 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, version=17.1.9, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, release=2, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible) Oct 5 04:54:01 localhost podman[100537]: 2025-10-05 08:54:01.799895901 +0000 UTC m=+0.198988432 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-iscsid, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:54:01 localhost podman[100539]: 2025-10-05 08:54:01.752626636 +0000 UTC m=+0.145650512 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, 
managed_by=tripleo_ansible, release=1, container_name=nova_migration_target, architecture=x86_64, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.component=openstack-nova-compute-container) Oct 5 04:54:01 localhost podman[100547]: 2025-10-05 08:54:01.719914306 +0000 UTC m=+0.108983985 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.9, 
build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team) Oct 5 04:54:01 localhost podman[100536]: 2025-10-05 08:54:01.77854743 +0000 UTC m=+0.177686002 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, version=17.1.9, batch=17.1_20250721.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:54:01 localhost podman[100536]: 2025-10-05 08:54:01.858509225 +0000 UTC m=+0.257647757 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, version=17.1.9, batch=17.1_20250721.1, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, release=1) Oct 5 04:54:01 localhost podman[100536]: unhealthy Oct 5 04:54:01 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:54:01 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:54:01 localhost podman[100547]: 2025-10-05 08:54:01.903422817 +0000 UTC m=+0.292492556 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:54:01 localhost podman[100547]: unhealthy Oct 5 04:54:01 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:54:01 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:54:01 localhost podman[100538]: 2025-10-05 08:54:01.920166782 +0000 UTC m=+0.315353127 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, container_name=ceilometer_agent_compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:54:01 localhost podman[100535]: 2025-10-05 08:54:01.932464726 +0000 UTC m=+0.338761103 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:54:01 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:54:02 localhost podman[100546]: 2025-10-05 08:54:02.010725805 +0000 UTC m=+0.403799693 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container) Oct 5 04:54:02 localhost podman[100546]: 2025-10-05 08:54:02.021028945 +0000 UTC m=+0.414102823 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, container_name=logrotate_crond, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4) Oct 5 04:54:02 localhost podman[100538]: 2025-10-05 08:54:02.029470945 +0000 UTC m=+0.424657170 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:54:02 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:54:02 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:54:02 localhost podman[100544]: 2025-10-05 08:54:02.063714356 +0000 UTC m=+0.450124703 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64) Oct 5 04:54:02 localhost podman[100537]: 2025-10-05 08:54:02.084795809 +0000 UTC m=+0.483888310 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container) Oct 5 04:54:02 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:54:02 localhost podman[100539]: 2025-10-05 08:54:02.127942242 +0000 UTC m=+0.520966198 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:54:02 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. 
Oct 5 04:54:02 localhost podman[100544]: 2025-10-05 08:54:02.141696316 +0000 UTC m=+0.528106704 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:54:02 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:54:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:54:09 localhost systemd[1]: tmp-crun.wnexpI.mount: Deactivated successfully. Oct 5 04:54:09 localhost podman[100705]: 2025-10-05 08:54:09.675576297 +0000 UTC m=+0.082229137 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, architecture=x86_64) Oct 5 04:54:09 localhost podman[100705]: 2025-10-05 08:54:09.729585356 +0000 UTC m=+0.136238216 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9) Oct 5 04:54:09 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:54:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:54:22 localhost podman[100808]: 2025-10-05 08:54:22.693815576 +0000 UTC m=+0.094567332 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-qdrouterd, architecture=x86_64, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, 
config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9) Oct 5 04:54:22 localhost podman[100808]: 2025-10-05 08:54:22.896319014 +0000 UTC m=+0.297070780 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:54:22 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:54:32 localhost systemd[1]: tmp-crun.zl8MgW.mount: Deactivated successfully. 
Oct 5 04:54:32 localhost podman[100839]: 2025-10-05 08:54:32.767811488 +0000 UTC m=+0.160018693 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, tcib_managed=true, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1) Oct 5 04:54:32 localhost podman[100866]: 2025-10-05 08:54:32.779023002 +0000 UTC m=+0.147921922 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=logrotate_crond, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git) Oct 5 04:54:32 localhost podman[100839]: 2025-10-05 08:54:32.799174061 +0000 UTC m=+0.191381266 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:54:32 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:54:32 localhost podman[100837]: 2025-10-05 08:54:32.72782211 +0000 UTC m=+0.127594190 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, vcs-type=git, managed_by=tripleo_ansible, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:54:32 localhost podman[100837]: 2025-10-05 08:54:32.85944972 +0000 UTC m=+0.259221790 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=2, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, vcs-type=git, build-date=2025-07-21T13:04:03) Oct 5 04:54:32 localhost podman[100838]: 2025-10-05 08:54:32.819383701 +0000 UTC m=+0.214756672 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, container_name=ovn_controller, version=17.1.9, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible) Oct 5 04:54:32 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:54:32 localhost podman[100840]: 2025-10-05 08:54:32.881972502 +0000 UTC m=+0.273140288 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, release=1, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4) Oct 5 04:54:32 localhost podman[100861]: 2025-10-05 08:54:32.846347833 +0000 UTC m=+0.218565005 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, tcib_managed=true, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Oct 5 04:54:32 localhost podman[100866]: 2025-10-05 08:54:32.913346345 +0000 UTC m=+0.282245295 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=logrotate_crond, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, version=17.1.9) Oct 5 04:54:32 localhost podman[100861]: 2025-10-05 08:54:32.924639402 +0000 UTC m=+0.296856564 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12) Oct 5 04:54:32 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:54:32 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:54:32 localhost podman[100840]: 2025-10-05 08:54:32.935838467 +0000 UTC m=+0.327006313 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
release=1, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T14:45:33) Oct 5 04:54:32 localhost podman[100838]: 2025-10-05 08:54:32.949322774 +0000 UTC m=+0.344695755 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.buildah.version=1.33.12, vcs-type=git, container_name=ovn_controller) Oct 5 04:54:32 localhost podman[100870]: 2025-10-05 08:54:32.74509544 +0000 UTC m=+0.111140003 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=ovn_metadata_agent, architecture=x86_64, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:54:32 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:54:32 localhost podman[100838]: unhealthy Oct 5 04:54:32 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:54:32 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 04:54:32 localhost podman[100846]: 2025-10-05 08:54:32.710575701 +0000 UTC m=+0.092470755 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, batch=17.1_20250721.1) Oct 5 04:54:33 localhost podman[100870]: 2025-10-05 08:54:33.029749431 +0000 UTC m=+0.395793994 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:54:33 localhost podman[100870]: unhealthy Oct 5 04:54:33 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:54:33 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:54:33 localhost podman[100846]: 2025-10-05 08:54:33.06536805 +0000 UTC m=+0.447263154 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
release=1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:54:33 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:54:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:54:40 localhost podman[101006]: 2025-10-05 08:54:40.676225704 +0000 UTC m=+0.085653149 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=nova_compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container) Oct 5 04:54:40 localhost podman[101006]: 2025-10-05 08:54:40.710775695 +0000 UTC m=+0.120203230 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, release=1, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 04:54:40 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:54:50 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:54:50 localhost recover_tripleo_nova_virtqemud[101035]: 62622 Oct 5 04:54:50 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:54:50 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:54:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:54:53 localhost podman[101036]: 2025-10-05 08:54:53.709889282 +0000 UTC m=+0.110961159 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:54:53 localhost podman[101036]: 2025-10-05 08:54:53.916973024 +0000 UTC m=+0.318044961 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:54:53 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:55:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:55:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 701 syncs, 7.43 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:55:03 localhost systemd[1]: tmp-crun.9KcUk0.mount: Deactivated successfully. Oct 5 04:55:03 localhost podman[101065]: 2025-10-05 08:55:03.722485892 +0000 UTC m=+0.123200711 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, release=2, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step3, 
vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9) Oct 5 04:55:03 localhost podman[101074]: 2025-10-05 08:55:03.770518958 +0000 UTC m=+0.142208447 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack 
TripleO Team, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, config_id=tripleo_step4, io.openshift.expose-services=, container_name=nova_migration_target) Oct 5 04:55:03 localhost podman[101069]: 2025-10-05 08:55:03.776751027 +0000 UTC m=+0.163024284 container 
health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, tcib_managed=true, managed_by=tripleo_ansible) Oct 5 04:55:03 localhost podman[101069]: 2025-10-05 08:55:03.813676912 +0000 UTC m=+0.199950169 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:55:03 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:55:03 localhost podman[101082]: 2025-10-05 08:55:03.817949908 +0000 UTC m=+0.190196983 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, batch=17.1_20250721.1, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git) Oct 5 04:55:03 localhost podman[101088]: 2025-10-05 08:55:03.874195428 +0000 UTC m=+0.249975659 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-cron) Oct 5 04:55:03 localhost podman[101065]: 2025-10-05 08:55:03.887712805 +0000 UTC m=+0.288427674 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, container_name=collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack 
Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, release=2, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:55:03 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:55:03 localhost podman[101082]: 2025-10-05 08:55:03.900097963 +0000 UTC m=+0.272345048 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:55:03 localhost podman[101088]: 2025-10-05 08:55:03.908710237 +0000 UTC m=+0.284490458 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 5 04:55:03 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:55:03 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:55:03 localhost podman[101066]: 2025-10-05 08:55:03.96802877 +0000 UTC m=+0.359196939 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, config_id=tripleo_step4, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 
ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:55:03 localhost podman[101066]: 2025-10-05 08:55:03.985855834 +0000 UTC m=+0.377023983 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, vcs-type=git, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true) Oct 5 04:55:03 localhost podman[101073]: 2025-10-05 08:55:03.986345087 +0000 UTC m=+0.369434057 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:55:03 localhost podman[101066]: unhealthy Oct 5 04:55:04 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:55:04 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 04:55:04 localhost podman[101094]: 2025-10-05 08:55:04.034829667 +0000 UTC m=+0.404084981 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53) Oct 5 04:55:04 localhost podman[101094]: 2025-10-05 08:55:04.054708327 +0000 UTC m=+0.423963661 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 5 04:55:04 localhost podman[101094]: unhealthy Oct 5 04:55:04 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:55:04 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:55:04 localhost podman[101073]: 2025-10-05 08:55:04.118974775 +0000 UTC m=+0.502063725 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., version=17.1.9, 
com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public) Oct 5 04:55:04 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:55:04 localhost podman[101074]: 2025-10-05 08:55:04.135774842 +0000 UTC m=+0.507464351 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, version=17.1.9, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 5 04:55:04 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:55:04 localhost systemd[1]: tmp-crun.Ru80oF.mount: Deactivated successfully. 
Oct 5 04:55:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:55:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.1 total, 600.0 interval
Cumulative writes: 5443 writes, 24K keys, 5443 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 5443 writes, 719 syncs, 7.57 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8 writes, 16 keys, 8 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 8 writes, 4 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 5 04:55:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:55:11 localhost podman[101238]: 2025-10-05 08:55:11.691090137 +0000 UTC m=+0.096280900 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Oct 5 04:55:11 localhost podman[101238]: 2025-10-05 
08:55:11.756842225 +0000 UTC m=+0.162032998 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.9, vcs-type=git, container_name=nova_compute, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., release=1, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Oct 5 04:55:11 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:55:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:55:24 localhost podman[101340]: 2025-10-05 08:55:24.688529389 +0000 UTC m=+0.090957104 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, container_name=metrics_qdr, batch=17.1_20250721.1) Oct 5 04:55:24 localhost podman[101340]: 2025-10-05 08:55:24.886863773 +0000 UTC m=+0.289291478 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:55:24 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:55:34 localhost systemd[1]: tmp-crun.cyJGOM.mount: Deactivated successfully. Oct 5 04:55:34 localhost podman[101371]: 2025-10-05 08:55:34.719757378 +0000 UTC m=+0.109511239 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, release=1, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, distribution-scope=public, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:55:34 localhost podman[101371]: 2025-10-05 08:55:34.731070546 +0000 UTC m=+0.120824427 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid) Oct 5 04:55:34 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:55:34 localhost podman[101391]: 2025-10-05 08:55:34.733322307 +0000 UTC m=+0.108386158 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, 
version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Oct 5 04:55:34 localhost podman[101369]: 2025-10-05 08:55:34.795922619 +0000 UTC m=+0.193859662 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:04:03) Oct 5 04:55:34 localhost podman[101391]: 2025-10-05 08:55:34.81687131 +0000 UTC m=+0.191935231 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public) Oct 5 04:55:34 localhost podman[101391]: unhealthy Oct 5 04:55:34 localhost podman[101384]: 2025-10-05 08:55:34.775487424 +0000 UTC m=+0.153124706 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:55:34 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:55:34 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:55:34 localhost podman[101384]: 2025-10-05 08:55:34.86067609 +0000 UTC m=+0.238313372 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:55:34 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:55:34 localhost podman[101374]: 2025-10-05 08:55:34.827393176 +0000 UTC m=+0.204510363 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:55:34 localhost podman[101369]: 2025-10-05 08:55:34.879743469 +0000 UTC m=+0.277680522 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, container_name=collectd, name=rhosp17/openstack-collectd, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, release=2, com.redhat.component=openstack-collectd-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, vendor=Red Hat, Inc.) 
Oct 5 04:55:34 localhost podman[101383]: 2025-10-05 08:55:34.882317519 +0000 UTC m=+0.259997132 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 5 04:55:34 localhost podman[101390]: 2025-10-05 08:55:34.929527583 +0000 UTC m=+0.302312822 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, tcib_managed=true, release=1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team) Oct 5 04:55:34 localhost podman[101390]: 2025-10-05 08:55:34.941667463 +0000 UTC m=+0.314452712 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, 
vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, io.openshift.expose-services=, release=1, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team) Oct 5 04:55:34 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:55:34 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:55:34 localhost podman[101370]: 2025-10-05 08:55:34.995155507 +0000 UTC m=+0.390575982 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, container_name=ovn_controller, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller) Oct 5 04:55:35 localhost podman[101374]: 2025-10-05 08:55:35.008976613 +0000 
UTC m=+0.386093800 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 5 04:55:35 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:55:35 localhost podman[101370]: 2025-10-05 08:55:35.037818388 +0000 UTC m=+0.433238863 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:55:35 localhost podman[101370]: unhealthy Oct 5 04:55:35 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:55:35 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:55:35 localhost podman[101383]: 2025-10-05 08:55:35.265023057 +0000 UTC m=+0.642702660 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:55:35 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:55:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:55:42 localhost podman[101538]: 2025-10-05 08:55:42.693342809 +0000 UTC m=+0.101790749 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, release=1, config_id=tripleo_step5, managed_by=tripleo_ansible) Oct 5 04:55:42 localhost podman[101538]: 2025-10-05 08:55:42.72461678 +0000 UTC m=+0.133064690 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, distribution-scope=public, io.openshift.tags=rhosp 
osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible) Oct 5 04:55:42 localhost 
systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:55:50 localhost sshd[101563]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:55:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:55:55 localhost podman[101565]: 2025-10-05 08:55:55.686141915 +0000 UTC m=+0.088898287 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, release=1, io.buildah.version=1.33.12, config_id=tripleo_step1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:55:55 localhost podman[101565]: 2025-10-05 08:55:55.8859682 +0000 UTC m=+0.288724552 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 5 04:55:55 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully.
Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.
Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.
Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.
Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.
Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.
Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.
Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.
Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.
Oct 5 04:56:05 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 5 04:56:05 localhost recover_tripleo_nova_virtqemud[101642]: 62622
Oct 5 04:56:05 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 5 04:56:05 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 5 04:56:05 localhost podman[101596]: 2025-10-05 08:56:05.714621439 +0000 UTC m=+0.114097524 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image':
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, tcib_managed=true, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 5 04:56:05 localhost podman[101615]: 2025-10-05 08:56:05.720866429 +0000 UTC m=+0.089920136 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro',
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Oct 5 04:56:05 localhost podman[101596]: 2025-10-05 08:56:05.750984238 +0000 UTC m=+0.150460353 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller,
release=1, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 5 04:56:05 localhost podman[101596]: unhealthy
Oct 5 04:56:05 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 04:56:05 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'.
Oct 5 04:56:05 localhost podman[101604]: 2025-10-05 08:56:05.764602779 +0000 UTC m=+0.150426812 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 5 04:56:05 localhost podman[101615]: 2025-10-05 08:56:05.822910505 +0000 UTC m=+0.191964242 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47)
Oct 5 04:56:05 localhost podman[101595]: 2025-10-05 08:56:05.823023018 +0000 UTC m=+0.229170014 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=2, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=collectd, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image':
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2)
Oct 5 04:56:05 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully.
Oct 5 04:56:05 localhost podman[101624]: 2025-10-05 08:56:05.87644448 +0000 UTC m=+0.244167551 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true)
Oct 5 04:56:05 localhost podman[101624]: 2025-10-05 08:56:05.961829592 +0000 UTC m=+0.329552663 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1,
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, name=rhosp17/openstack-cron, release=1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team)
Oct 5 04:56:05 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully.
Oct 5 04:56:05 localhost podman[101597]: 2025-10-05 08:56:05.927951131 +0000 UTC m=+0.325457662 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro',
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 5 04:56:06 localhost podman[101629]: 2025-10-05 08:56:05.980197102 +0000 UTC m=+0.349945668 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1',
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_metadata_agent, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git)
Oct 5 04:56:06 localhost podman[101603]: 2025-10-05 08:56:06.04374224 +0000 UTC m=+0.429861191 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp 
openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, tcib_managed=true, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 04:56:06 localhost podman[101595]: 2025-10-05 08:56:06.056824235 +0000 UTC m=+0.462971281 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro',
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-collectd, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, release=2)
Oct 5 04:56:06 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully.
Oct 5 04:56:06 localhost podman[101603]: 2025-10-05 08:56:06.07755657 +0000 UTC m=+0.463675531 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1
ceilometer-compute, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:56:06 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:56:06 localhost podman[101629]: 2025-10-05 08:56:06.111237676 +0000 UTC m=+0.480986302 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:56:06 localhost podman[101629]: unhealthy Oct 5 04:56:06 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:56:06 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:56:06 localhost podman[101604]: 2025-10-05 08:56:06.153010932 +0000 UTC m=+0.538834965 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1, version=17.1.9, maintainer=OpenStack 
TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:56:06 localhost podman[101597]: 2025-10-05 08:56:06.161562504 +0000 UTC m=+0.559069075 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Oct 5 04:56:06 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:56:06 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:56:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:56:13 localhost podman[101769]: 2025-10-05 08:56:13.695672593 +0000 UTC m=+0.101466790 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:56:13 localhost systemd[1]: tmp-crun.513G0V.mount: Deactivated successfully. 
Oct 5 04:56:13 localhost podman[101769]: 2025-10-05 08:56:13.718140954 +0000 UTC m=+0.123935111 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true) Oct 5 04:56:13 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:56:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:56:26 localhost podman[101872]: 2025-10-05 08:56:26.683545647 +0000 UTC m=+0.082973387 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:56:26 localhost podman[101872]: 2025-10-05 08:56:26.877939034 +0000 UTC m=+0.277366784 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, release=1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1) Oct 5 04:56:26 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:56:36 localhost systemd[1]: tmp-crun.x03EM3.mount: Deactivated successfully. Oct 5 04:56:36 localhost podman[101919]: 2025-10-05 08:56:36.763034786 +0000 UTC m=+0.140993245 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, config_id=tripleo_step4, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond) Oct 5 04:56:36 localhost podman[101919]: 2025-10-05 08:56:36.774684664 +0000 UTC m=+0.152643143 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:07:52, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:56:36 localhost podman[101910]: 2025-10-05 08:56:36.781692574 +0000 UTC m=+0.161896394 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:56:36 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:56:36 localhost podman[101902]: 2025-10-05 08:56:36.698608544 +0000 UTC m=+0.100275137 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=2, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, vcs-type=git) Oct 5 04:56:36 localhost podman[101910]: 2025-10-05 08:56:36.804923746 +0000 UTC m=+0.185127596 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team) Oct 5 04:56:36 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:56:36 localhost podman[101903]: 2025-10-05 08:56:36.819154982 +0000 UTC m=+0.214930615 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:56:36 localhost podman[101904]: 2025-10-05 08:56:36.725100295 +0000 
UTC m=+0.114884395 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=iscsid, distribution-scope=public, vcs-type=git, architecture=x86_64) Oct 5 04:56:36 localhost podman[101911]: 2025-10-05 08:56:36.858072511 +0000 UTC m=+0.246339920 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, release=1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, container_name=nova_migration_target, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 04:56:36 localhost podman[101918]: 2025-10-05 08:56:36.807913547 +0000 UTC m=+0.191391816 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:56:36 localhost podman[101903]: 2025-10-05 08:56:36.862142802 +0000 UTC m=+0.257918385 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, container_name=ovn_controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-ovn-controller-container) Oct 5 04:56:36 localhost podman[101903]: unhealthy Oct 5 04:56:36 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:56:36 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 04:56:36 localhost podman[101902]: 2025-10-05 08:56:36.879379731 +0000 UTC m=+0.281046314 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 
17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, io.buildah.version=1.33.12, container_name=collectd, vcs-type=git, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.openshift.expose-services=) Oct 5 04:56:36 localhost podman[101904]: 2025-10-05 08:56:36.909595283 +0000 UTC m=+0.299379333 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.openshift.expose-services=, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1) Oct 5 04:56:36 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:56:36 localhost podman[101918]: 2025-10-05 08:56:36.943806213 +0000 UTC m=+0.327284512 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:56:36 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:56:36 localhost podman[101926]: 2025-10-05 08:56:36.969101601 +0000 UTC m=+0.348536469 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:56:36 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:56:37 localhost podman[101926]: 2025-10-05 08:56:37.03820456 +0000 UTC m=+0.417639508 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., vcs-type=git, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1, config_id=tripleo_step4, distribution-scope=public) Oct 5 04:56:37 localhost podman[101926]: unhealthy Oct 5 04:56:37 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, 
status=1/FAILURE Oct 5 04:56:37 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:56:37 localhost podman[101911]: 2025-10-05 08:56:37.224305161 +0000 UTC m=+0.612572630 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1) Oct 5 04:56:37 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:56:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:56:44 localhost systemd[1]: tmp-crun.3bnnP1.mount: Deactivated successfully. Oct 5 04:56:44 localhost podman[102078]: 2025-10-05 08:56:44.670163758 +0000 UTC m=+0.081552939 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1, architecture=x86_64, container_name=nova_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9) Oct 5 04:56:44 localhost podman[102078]: 2025-10-05 08:56:44.691700734 +0000 UTC 
m=+0.103089855 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step5, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37) Oct 5 04:56:44 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:56:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:56:57 localhost podman[102105]: 2025-10-05 08:56:57.67271608 +0000 UTC m=+0.083403259 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git) Oct 5 04:56:57 localhost podman[102105]: 2025-10-05 08:56:57.857953888 +0000 UTC m=+0.268641067 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, container_name=metrics_qdr, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:56:57 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:57:07 localhost systemd[1]: tmp-crun.0Esyon.mount: Deactivated successfully. Oct 5 04:57:07 localhost podman[102150]: 2025-10-05 08:57:07.745620858 +0000 UTC m=+0.126535112 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.9, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, distribution-scope=public) Oct 5 04:57:07 localhost podman[102136]: 2025-10-05 08:57:07.706175426 +0000 UTC m=+0.095816077 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 04:57:07 localhost podman[102136]: 2025-10-05 08:57:07.792815402 +0000 UTC m=+0.182456063 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git) Oct 5 04:57:07 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:57:07 localhost podman[102134]: 2025-10-05 08:57:07.806691739 +0000 UTC m=+0.203017161 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, release=2, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team) Oct 5 04:57:07 localhost podman[102135]: 2025-10-05 08:57:07.858319413 +0000 UTC m=+0.249776903 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, batch=17.1_20250721.1) Oct 5 04:57:07 localhost podman[102150]: 2025-10-05 08:57:07.866377262 +0000 UTC m=+0.247291665 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.9, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:57:07 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:57:07 localhost podman[102173]: 2025-10-05 08:57:07.834479275 +0000 UTC m=+0.200575476 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, release=1) Oct 5 04:57:07 localhost podman[102143]: 2025-10-05 08:57:07.725562383 +0000 UTC m=+0.105696536 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, 
health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, version=17.1.9, container_name=nova_migration_target, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, 
maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, vcs-type=git) Oct 5 04:57:07 localhost podman[102135]: 2025-10-05 08:57:07.921366888 +0000 UTC m=+0.312824388 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:57:07 
localhost podman[102135]: unhealthy Oct 5 04:57:07 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:57:07 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:57:07 localhost podman[102138]: 2025-10-05 08:57:07.934235218 +0000 UTC m=+0.312252393 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:57:07 localhost podman[102134]: 2025-10-05 08:57:07.93688579 +0000 UTC m=+0.333211202 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, vcs-type=git, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:57:07 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:57:07 localhost podman[102138]: 2025-10-05 08:57:07.984386781 +0000 UTC m=+0.362403966 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:57:07 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:57:07 localhost podman[102155]: 2025-10-05 08:57:07.997308323 +0000 UTC m=+0.366358824 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, container_name=logrotate_crond) Oct 5 04:57:08 localhost podman[102173]: 2025-10-05 08:57:08.015854748 +0000 UTC m=+0.381950949 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, 
io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:57:08 localhost podman[102173]: unhealthy Oct 5 04:57:08 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:57:08 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:57:08 localhost podman[102155]: 2025-10-05 08:57:08.026714613 +0000 UTC m=+0.395765094 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=logrotate_crond, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, version=17.1.9, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:57:08 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:57:08 localhost podman[102143]: 2025-10-05 08:57:08.093846169 +0000 UTC m=+0.473980332 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, config_id=tripleo_step4, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1) Oct 5 04:57:08 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:57:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:57:15 localhost podman[102308]: 2025-10-05 08:57:15.686321736 +0000 UTC m=+0.092809335 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, vendor=Red Hat, Inc.) 
Oct 5 04:57:15 localhost podman[102308]: 2025-10-05 08:57:15.739864652 +0000 UTC m=+0.146352171 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, architecture=x86_64, release=1, build-date=2025-07-21T14:48:37, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 5 04:57:15 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:57:19 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:57:19 localhost recover_tripleo_nova_virtqemud[102351]: 62622 Oct 5 04:57:19 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:57:19 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:57:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:57:28 localhost systemd[1]: tmp-crun.Rtfy4K.mount: Deactivated successfully. 
Oct 5 04:57:28 localhost podman[102467]: 2025-10-05 08:57:28.714045162 +0000 UTC m=+0.123175981 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, tcib_managed=true, 
container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:57:28 localhost podman[102467]: 2025-10-05 08:57:28.931882906 +0000 UTC m=+0.341013765 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step1, distribution-scope=public, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 5 04:57:28 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:57:38 localhost systemd[1]: tmp-crun.ZF3557.mount: Deactivated successfully. Oct 5 04:57:38 localhost systemd[1]: tmp-crun.01Zfv7.mount: Deactivated successfully. Oct 5 04:57:38 localhost podman[102496]: 2025-10-05 08:57:38.714565125 +0000 UTC m=+0.104462342 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, distribution-scope=public, container_name=iscsid, vcs-type=git, 
com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true) Oct 5 04:57:38 localhost podman[102494]: 2025-10-05 08:57:38.819766646 +0000 UTC m=+0.217076865 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, container_name=collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, version=17.1.9, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3) Oct 5 04:57:38 localhost podman[102494]: 2025-10-05 08:57:38.834133917 +0000 UTC m=+0.231444136 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:57:38 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:57:38 localhost podman[102496]: 2025-10-05 08:57:38.847712986 +0000 UTC m=+0.237610233 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, distribution-scope=public, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 5 04:57:38 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:57:38 localhost podman[102508]: 2025-10-05 08:57:38.883124258 +0000 UTC m=+0.250932474 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:57:38 localhost podman[102507]: 2025-10-05 08:57:38.923484707 +0000 UTC m=+0.301357307 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_migration_target, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:57:38 localhost podman[102508]: 2025-10-05 08:57:38.935706219 +0000 UTC m=+0.303514465 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, 
architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.) Oct 5 04:57:38 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:57:38 localhost podman[102498]: 2025-10-05 08:57:38.78977269 +0000 UTC m=+0.176899491 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute) Oct 5 04:57:39 localhost podman[102498]: 2025-10-05 08:57:39.023955919 +0000 UTC m=+0.411082760 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:57:39 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 04:57:39 localhost podman[102495]: 2025-10-05 08:57:39.070364091 +0000 UTC m=+0.465740307 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=ovn_controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44) Oct 5 04:57:39 localhost podman[102495]: 2025-10-05 08:57:39.079093988 +0000 
UTC m=+0.474470194 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, name=rhosp17/openstack-ovn-controller) Oct 5 04:57:39 localhost podman[102495]: unhealthy Oct 5 04:57:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, 
code=exited, status=1/FAILURE Oct 5 04:57:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:57:39 localhost podman[102514]: 2025-10-05 08:57:39.119008233 +0000 UTC m=+0.492456182 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-cron, tcib_managed=true, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.buildah.version=1.33.12, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64) Oct 5 04:57:39 localhost podman[102520]: 2025-10-05 08:57:38.735497704 +0000 UTC m=+0.108844351 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public) Oct 5 04:57:39 localhost podman[102514]: 2025-10-05 08:57:39.154625402 +0000 UTC m=+0.528073351 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:57:39 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:57:39 localhost podman[102520]: 2025-10-05 08:57:39.177785452 +0000 UTC m=+0.551132089 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO 
Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, release=1) Oct 5 04:57:39 localhost podman[102520]: unhealthy Oct 5 04:57:39 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:57:39 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:57:39 localhost podman[102507]: 2025-10-05 08:57:39.308668191 +0000 UTC m=+0.686540751 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-type=git) Oct 5 04:57:39 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:57:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:57:46 localhost podman[102662]: 2025-10-05 08:57:46.680514947 +0000 UTC m=+0.086567045 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5) Oct 5 04:57:46 localhost podman[102662]: 2025-10-05 08:57:46.713852454 +0000 UTC m=+0.119904512 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
com.redhat.component=openstack-nova-compute-container, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:48:37, distribution-scope=public, release=1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step5, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible) Oct 5 04:57:46 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:57:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:57:59 localhost systemd[1]: tmp-crun.fzaYM2.mount: Deactivated successfully. 
Oct 5 04:57:59 localhost podman[102686]: 2025-10-05 08:57:59.672926734 +0000 UTC m=+0.085428394 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, io.openshift.expose-services=, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, distribution-scope=public, 
container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:57:59 localhost podman[102686]: 2025-10-05 08:57:59.87171217 +0000 UTC m=+0.284213710 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, release=1, config_id=tripleo_step1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Oct 5 04:57:59 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:58:09 localhost systemd[1]: tmp-crun.ZfZiD4.mount: Deactivated successfully. Oct 5 04:58:09 localhost podman[102723]: 2025-10-05 08:58:09.699408564 +0000 UTC m=+0.094367128 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, container_name=nova_migration_target, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:58:09 localhost podman[102718]: 2025-10-05 08:58:09.756687451 +0000 UTC m=+0.149551038 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:45:33, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 5 04:58:09 localhost podman[102716]: 2025-10-05 08:58:09.794831059 +0000 UTC m=+0.194879821 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, release=1) Oct 5 04:58:09 localhost podman[102716]: 2025-10-05 08:58:09.807470582 +0000 UTC m=+0.207519334 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=ovn_controller, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, architecture=x86_64, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9) Oct 5 04:58:09 localhost podman[102729]: 2025-10-05 08:58:09.824477904 +0000 UTC m=+0.213033904 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-07-21T15:29:47, version=17.1.9, architecture=x86_64, release=1, batch=17.1_20250721.1, config_id=tripleo_step4) Oct 5 04:58:09 localhost podman[102746]: 2025-10-05 08:58:09.857086161 +0000 UTC m=+0.237837109 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:58:09 localhost 
podman[102716]: unhealthy Oct 5 04:58:09 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:58:09 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:58:09 localhost podman[102718]: 2025-10-05 08:58:09.884793925 +0000 UTC m=+0.277657512 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 5 04:58:09 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:58:09 localhost podman[102729]: 2025-10-05 08:58:09.904903572 +0000 UTC m=+0.293459582 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:58:09 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:58:09 localhost podman[102717]: 2025-10-05 08:58:09.96624884 +0000 UTC m=+0.355600741 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, config_id=tripleo_step3, description=Red Hat OpenStack 
Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Oct 5 04:58:09 localhost podman[102746]: 2025-10-05 08:58:09.992251547 +0000 UTC m=+0.373002495 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, version=17.1.9, distribution-scope=public, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:58:09 localhost podman[102717]: 2025-10-05 08:58:09.998573079 +0000 UTC m=+0.387924980 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, release=1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:58:10 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:58:10 localhost podman[102735]: 2025-10-05 08:58:09.834173629 +0000 UTC m=+0.216292683 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, 
container_name=logrotate_crond, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4) Oct 5 04:58:10 localhost podman[102746]: unhealthy Oct 5 04:58:10 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:58:10 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:58:10 localhost podman[102723]: 2025-10-05 08:58:10.064520243 +0000 UTC m=+0.459478777 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, container_name=nova_migration_target, distribution-scope=public, architecture=x86_64) Oct 5 04:58:10 localhost podman[102735]: 2025-10-05 08:58:10.066895217 +0000 UTC m=+0.449014271 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, container_name=logrotate_crond, vcs-type=git, version=17.1.9, release=1, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vendor=Red Hat, 
Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:58:10 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:58:10 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. 
Oct 5 04:58:10 localhost podman[102715]: 2025-10-05 08:58:10.068598273 +0000 UTC m=+0.472819478 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, container_name=collectd, release=2, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:58:10 localhost podman[102715]: 2025-10-05 08:58:10.198955718 +0000 UTC m=+0.603176853 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, vcs-type=git, architecture=x86_64) Oct 5 04:58:10 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:58:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 04:58:17 localhost podman[102878]: 2025-10-05 08:58:17.667486763 +0000 UTC m=+0.079525303 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, version=17.1.9, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, container_name=nova_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:58:17 localhost podman[102878]: 2025-10-05 08:58:17.720227378 +0000 UTC m=+0.132265928 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=nova_compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12) Oct 5 04:58:17 localhost 
systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:58:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:58:30 localhost podman[102980]: 2025-10-05 08:58:30.68189932 +0000 UTC m=+0.086603877 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, container_name=metrics_qdr, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.openshift.expose-services=) Oct 5 04:58:30 localhost podman[102980]: 2025-10-05 08:58:30.879669787 +0000 UTC m=+0.284374364 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, version=17.1.9) Oct 5 04:58:30 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. 
Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:58:40 localhost podman[103011]: 2025-10-05 08:58:40.714449672 +0000 UTC m=+0.113259450 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Oct 5 04:58:40 localhost systemd[1]: tmp-crun.yJUpIC.mount: Deactivated successfully. Oct 5 04:58:40 localhost podman[103029]: 2025-10-05 08:58:40.762723915 +0000 UTC m=+0.147060680 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public) Oct 5 04:58:40 localhost podman[103012]: 2025-10-05 08:58:40.833742937 +0000 UTC m=+0.231070236 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.9, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, build-date=2025-07-21T13:27:15) Oct 5 04:58:40 localhost podman[103012]: 2025-10-05 08:58:40.872319345 +0000 UTC m=+0.269646554 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 5 04:58:40 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:58:40 localhost podman[103011]: 2025-10-05 08:58:40.887631742 +0000 UTC m=+0.286441540 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller) Oct 5 04:58:40 localhost podman[103011]: unhealthy Oct 5 04:58:40 localhost podman[103029]: 2025-10-05 
08:58:40.896251176 +0000 UTC m=+0.280587971 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Oct 5 04:58:40 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:58:40 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:58:40 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 04:58:40 localhost podman[103019]: 2025-10-05 08:58:40.87357512 +0000 UTC m=+0.259401206 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_migration_target) Oct 5 04:58:40 localhost podman[103010]: 2025-10-05 08:58:40.977157857 +0000 UTC m=+0.377516508 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=2, com.redhat.component=openstack-collectd-container, container_name=collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, 
managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3) Oct 5 04:58:41 localhost podman[103010]: 2025-10-05 08:58:41.015988033 +0000 UTC m=+0.416346664 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack 
Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, release=2, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, managed_by=tripleo_ansible, 
build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:58:41 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:58:41 localhost podman[103013]: 2025-10-05 08:58:41.031841843 +0000 UTC m=+0.419991032 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-07-21T14:45:33, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, release=1) Oct 5 04:58:41 localhost podman[103032]: 2025-10-05 08:58:41.079091078 +0000 UTC m=+0.458874630 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1, version=17.1.9, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team) Oct 5 04:58:41 localhost podman[103032]: 2025-10-05 08:58:41.092854413 +0000 UTC m=+0.472637965 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, release=1, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond) Oct 5 04:58:41 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 04:58:41 localhost podman[103013]: 2025-10-05 08:58:41.143310375 +0000 UTC m=+0.531459564 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible) Oct 5 04:58:41 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:58:41 localhost podman[103037]: 2025-10-05 08:58:40.789846853 +0000 UTC m=+0.167203469 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, build-date=2025-07-21T16:28:53, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:58:41 localhost podman[103037]: 2025-10-05 08:58:41.228497961 +0000 UTC m=+0.605854577 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.9, architecture=x86_64, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:58:41 localhost podman[103037]: unhealthy Oct 5 04:58:41 localhost podman[103019]: 2025-10-05 08:58:41.236870129 +0000 UTC m=+0.622696255 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4) Oct 5 04:58:41 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:58:41 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 04:58:41 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:58:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:58:48 localhost systemd[1]: tmp-crun.4vWQyZ.mount: Deactivated successfully. 
Oct 5 04:58:48 localhost podman[103183]: 2025-10-05 08:58:48.683023316 +0000 UTC m=+0.091262502 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, tcib_managed=true, version=17.1.9, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1) Oct 5 04:58:48 localhost podman[103183]: 2025-10-05 08:58:48.711545332 +0000 UTC m=+0.119784458 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 
'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 
04:58:48 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:59:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:59:00 localhost recover_tripleo_nova_virtqemud[103211]: 62622 Oct 5 04:59:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:59:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:59:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 04:59:01 localhost podman[103212]: 2025-10-05 08:59:01.67905347 +0000 UTC m=+0.089937667 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:59:01 localhost podman[103212]: 2025-10-05 08:59:01.880037235 +0000 UTC m=+0.290921412 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 5 04:59:01 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 04:59:11 localhost podman[103242]: 2025-10-05 08:59:11.713560147 +0000 UTC m=+0.107934877 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vcs-type=git, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:59:11 localhost podman[103242]: 2025-10-05 08:59:11.722780178 +0000 UTC m=+0.117154908 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, container_name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:59:11 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 04:59:11 localhost podman[103241]: 2025-10-05 08:59:11.765714055 +0000 UTC m=+0.166024386 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true) Oct 5 04:59:11 localhost systemd[1]: tmp-crun.Ie4vvP.mount: Deactivated 
successfully. Oct 5 04:59:11 localhost podman[103262]: 2025-10-05 08:59:11.822469548 +0000 UTC m=+0.204910833 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container) Oct 5 04:59:11 localhost podman[103262]: 2025-10-05 08:59:11.83356026 +0000 UTC m=+0.216001535 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=logrotate_crond, batch=17.1_20250721.1, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git) Oct 5 04:59:11 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:59:11 localhost podman[103241]: 2025-10-05 08:59:11.853305427 +0000 UTC m=+0.253615798 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, container_name=ovn_controller, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true) Oct 5 04:59:11 localhost podman[103241]: unhealthy Oct 5 04:59:11 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:59:11 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 04:59:11 localhost podman[103248]: 2025-10-05 08:59:11.870496145 +0000 UTC m=+0.261972976 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container) Oct 5 04:59:11 localhost podman[103273]: 2025-10-05 08:59:11.926818457 +0000 UTC m=+0.300175514 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, batch=17.1_20250721.1, container_name=ovn_metadata_agent, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, 
build-date=2025-07-21T16:28:53, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 04:59:11 localhost podman[103273]: 2025-10-05 08:59:11.970793422 +0000 UTC m=+0.344150509 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:59:11 localhost podman[103243]: 2025-10-05 08:59:11.970296818 +0000 UTC m=+0.364408270 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, architecture=x86_64, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, version=17.1.9) Oct 5 04:59:11 localhost podman[103273]: unhealthy Oct 5 04:59:11 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:59:11 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:59:11 localhost podman[103240]: 2025-10-05 08:59:11.993362226 +0000 UTC m=+0.395434365 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, 
Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, release=2, version=17.1.9, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 5 04:59:12 localhost podman[103240]: 2025-10-05 08:59:12.003724428 +0000 UTC m=+0.405796537 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, container_name=collectd, release=2, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Oct 5 04:59:12 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 04:59:12 localhost podman[103256]: 2025-10-05 08:59:12.087547637 +0000 UTC m=+0.473066626 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, 
maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_ipmi, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:59:12 localhost podman[103243]: 2025-10-05 08:59:12.103771889 +0000 UTC m=+0.497883291 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1) Oct 5 04:59:12 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:59:12 localhost podman[103256]: 2025-10-05 08:59:12.145039531 +0000 UTC m=+0.530558460 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:59:12 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:59:12 localhost podman[103248]: 2025-10-05 08:59:12.24799595 +0000 UTC m=+0.639472791 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, release=1, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
container_name=nova_migration_target, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 5 04:59:12 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:59:19 localhost podman[103406]: 2025-10-05 08:59:19.685816269 +0000 UTC m=+0.092913058 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:59:19 localhost podman[103406]: 2025-10-05 08:59:19.745101112 +0000 UTC m=+0.152197861 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, config_id=tripleo_step5, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc.) Oct 5 04:59:19 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 04:59:23 localhost podman[103533]: 2025-10-05 08:59:23.943710171 +0000 UTC m=+0.086034290 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, 
io.buildah.version=1.33.12, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 04:59:24 localhost podman[103533]: 2025-10-05 08:59:24.066299545 +0000 UTC m=+0.208623614 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, version=7, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 04:59:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 04:59:32 localhost podman[103677]: 2025-10-05 08:59:32.681347211 +0000 UTC m=+0.091411376 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., 
build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:59:32 localhost podman[103677]: 2025-10-05 08:59:32.910818622 +0000 UTC m=+0.320882777 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, config_id=tripleo_step1, distribution-scope=public, vendor=Red Hat, Inc., release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:59:32 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 04:59:42 localhost systemd[1]: tmp-crun.s3dlCi.mount: Deactivated successfully. Oct 5 04:59:42 localhost podman[103708]: 2025-10-05 08:59:42.711678616 +0000 UTC m=+0.101602054 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:59:42 localhost podman[103733]: 2025-10-05 08:59:42.773583209 +0000 UTC m=+0.148858349 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=logrotate_crond, name=rhosp17/openstack-cron) Oct 5 04:59:42 localhost podman[103734]: 2025-10-05 08:59:42.78354381 +0000 UTC m=+0.149922928 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, vcs-type=git, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, distribution-scope=public) Oct 5 04:59:42 localhost podman[103708]: 2025-10-05 08:59:42.796270036 +0000 UTC m=+0.186193454 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, maintainer=OpenStack TripleO Team, 
architecture=x86_64, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:59:42 localhost podman[103733]: 2025-10-05 08:59:42.808215151 +0000 UTC m=+0.183490261 container exec_died 
c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, release=1, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 cron, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:59:42 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 04:59:42 localhost podman[103734]: 2025-10-05 08:59:42.817990267 +0000 UTC m=+0.184369375 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:59:42 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 04:59:42 localhost podman[103734]: unhealthy Oct 5 04:59:42 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:59:42 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 04:59:42 localhost podman[103709]: 2025-10-05 08:59:42.871673177 +0000 UTC m=+0.260679621 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:59:42 localhost podman[103707]: 2025-10-05 08:59:42.740815248 +0000 UTC m=+0.134857469 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, release=1, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:59:42 localhost podman[103709]: 2025-10-05 08:59:42.925628174 +0000 UTC m=+0.314634588 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:59:42 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 04:59:42 localhost podman[103707]: 2025-10-05 08:59:42.971725958 +0000 UTC m=+0.365768169 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:59:42 localhost podman[103706]: 2025-10-05 08:59:42.977827234 +0000 UTC m=+0.374313341 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-collectd-container, container_name=collectd) Oct 5 04:59:42 localhost podman[103707]: unhealthy Oct 5 04:59:42 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:59:42 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 04:59:43 localhost podman[103715]: 2025-10-05 08:59:42.927662489 +0000 UTC m=+0.307102613 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, architecture=x86_64) Oct 5 04:59:43 localhost podman[103728]: 2025-10-05 08:59:42.877909656 +0000 UTC m=+0.253365441 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, release=1) Oct 5 04:59:43 localhost podman[103728]: 2025-10-05 08:59:43.109824503 +0000 UTC m=+0.485280318 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:59:43 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 04:59:43 localhost podman[103706]: 2025-10-05 08:59:43.158672431 +0000 UTC m=+0.555158488 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, container_name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd) Oct 5 04:59:43 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 04:59:43 localhost podman[103715]: 2025-10-05 08:59:43.325248652 +0000 UTC m=+0.704688766 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git) Oct 5 04:59:43 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 04:59:43 localhost systemd[1]: tmp-crun.fy6S28.mount: Deactivated successfully. Oct 5 04:59:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 04:59:50 localhost systemd[1]: tmp-crun.SDotME.mount: Deactivated successfully. 
Oct 5 04:59:50 localhost podman[103883]: 2025-10-05 08:59:50.692677047 +0000 UTC m=+0.095194110 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, tcib_managed=true, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, config_id=tripleo_step5) Oct 5 04:59:50 localhost podman[103883]: 2025-10-05 08:59:50.747830796 +0000 UTC m=+0.150347839 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, 
io.openshift.expose-services=) Oct 5 04:59:50 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 05:00:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 05:00:03 localhost podman[103913]: 2025-10-05 09:00:03.680605831 +0000 UTC m=+0.091101578 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, release=1, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:00:03 localhost podman[103913]: 2025-10-05 09:00:03.877213837 +0000 UTC m=+0.287709564 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, version=17.1.9) Oct 5 05:00:03 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:00:13 localhost systemd[1]: tmp-crun.SIyMSp.mount: Deactivated successfully. Oct 5 05:00:13 localhost systemd[1]: tmp-crun.kXssgJ.mount: Deactivated successfully. Oct 5 05:00:13 localhost podman[103947]: 2025-10-05 09:00:13.682048847 +0000 UTC m=+0.088993982 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Oct 5 05:00:13 localhost podman[103944]: 2025-10-05 09:00:13.737387222 +0000 UTC m=+0.146935676 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, release=2, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Oct 5 05:00:13 localhost podman[103944]: 2025-10-05 09:00:13.750616852 +0000 UTC m=+0.160165296 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat 
OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=collectd, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, release=2) Oct 5 05:00:13 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 05:00:13 localhost podman[103961]: 2025-10-05 09:00:13.710869541 +0000 UTC m=+0.106912879 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, release=1) Oct 5 05:00:13 localhost podman[103947]: 2025-10-05 09:00:13.766699179 +0000 UTC m=+0.173644324 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, release=1, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9) Oct 5 05:00:13 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 05:00:13 localhost podman[103961]: 2025-10-05 09:00:13.792694756 +0000 UTC m=+0.188738124 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, 
architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.9) Oct 5 05:00:13 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 05:00:13 localhost podman[103966]: 2025-10-05 09:00:13.850108677 +0000 UTC m=+0.246733171 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, 
name=rhosp17/openstack-cron, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 05:00:13 localhost podman[103966]: 2025-10-05 09:00:13.885809488 +0000 UTC m=+0.282433932 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Oct 5 05:00:13 localhost podman[103968]: 2025-10-05 09:00:13.893796986 +0000 UTC m=+0.278904907 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:00:13 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 05:00:13 localhost podman[103968]: 2025-10-05 09:00:13.909697817 +0000 UTC m=+0.294805728 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:28:53, vcs-type=git, container_name=ovn_metadata_agent, architecture=x86_64) Oct 5 05:00:13 localhost podman[103968]: unhealthy Oct 5 05:00:13 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:00:13 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:00:13 localhost podman[103945]: 2025-10-05 09:00:13.958812224 +0000 UTC m=+0.368271806 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Oct 5 05:00:14 localhost podman[103945]: 2025-10-05 09:00:14.000773455 +0000 
UTC m=+0.410232997 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.) 
Oct 5 05:00:14 localhost podman[103945]: unhealthy Oct 5 05:00:14 localhost podman[103948]: 2025-10-05 09:00:14.006336876 +0000 UTC m=+0.400039660 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 
nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 05:00:14 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:00:14 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:00:14 localhost podman[103946]: 2025-10-05 09:00:14.047870586 +0000 UTC m=+0.436077101 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, container_name=iscsid, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_id=tripleo_step3, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64) Oct 5 05:00:14 localhost podman[103946]: 2025-10-05 09:00:14.059873872 +0000 UTC m=+0.448080377 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid) Oct 5 05:00:14 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 05:00:14 localhost podman[103948]: 2025-10-05 09:00:14.389829025 +0000 UTC m=+0.783531769 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:00:14 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:00:20 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:00:20 localhost recover_tripleo_nova_virtqemud[104117]: 62622 Oct 5 05:00:20 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:00:20 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 05:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:00:21 localhost podman[104118]: 2025-10-05 09:00:21.693822455 +0000 UTC m=+0.098936621 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:00:21 localhost 
podman[104118]: 2025-10-05 09:00:21.753948111 +0000 UTC m=+0.159062257 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20250721.1) Oct 5 05:00:21 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 05:00:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 05:00:34 localhost podman[104221]: 2025-10-05 09:00:34.671729846 +0000 UTC m=+0.076197363 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, release=1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:00:34 localhost podman[104221]: 2025-10-05 09:00:34.881905402 +0000 UTC m=+0.286372949 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp 
osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:00:34 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 05:00:44 localhost podman[104257]: 2025-10-05 09:00:44.715601296 +0000 UTC m=+0.105925812 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., release=1, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, maintainer=OpenStack 
TripleO Team, distribution-scope=public, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Oct 5 05:00:44 localhost podman[104257]: 2025-10-05 09:00:44.757696511 +0000 UTC m=+0.148021057 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.buildah.version=1.33.12, tcib_managed=true, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T14:45:33) Oct 5 05:00:44 localhost podman[104268]: 2025-10-05 09:00:44.771465425 +0000 UTC m=+0.154568724 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, version=17.1.9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi) Oct 5 05:00:44 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 05:00:44 localhost podman[104268]: 2025-10-05 09:00:44.796682071 +0000 UTC m=+0.179785400 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, 
managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 05:00:44 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 05:00:44 localhost podman[104259]: 2025-10-05 09:00:44.814037543 +0000 UTC m=+0.202169039 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, 
batch=17.1_20250721.1, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:00:44 localhost podman[104273]: 2025-10-05 09:00:44.729045742 +0000 UTC m=+0.107934917 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron) Oct 5 05:00:44 localhost podman[104273]: 2025-10-05 09:00:44.859645234 +0000 UTC m=+0.238534409 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, 
build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12) Oct 5 05:00:44 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 05:00:44 localhost podman[104282]: 2025-10-05 09:00:44.877087578 +0000 UTC m=+0.252642422 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:00:44 localhost podman[104252]: 2025-10-05 09:00:44.900010981 +0000 UTC m=+0.295695983 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, version=17.1.9, batch=17.1_20250721.1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 05:00:44 localhost podman[104252]: 2025-10-05 09:00:44.911592396 +0000 UTC m=+0.307277398 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public) Oct 5 05:00:44 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 05:00:44 localhost podman[104282]: 2025-10-05 09:00:44.922921404 +0000 UTC m=+0.298476258 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, version=17.1.9, managed_by=tripleo_ansible) Oct 5 05:00:44 localhost podman[104282]: unhealthy Oct 5 05:00:44 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:00:44 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:00:44 localhost podman[104250]: 2025-10-05 09:00:44.926969985 +0000 UTC m=+0.329542314 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Oct 5 05:00:44 localhost podman[104251]: 2025-10-05 09:00:44.974155517 +0000 UTC m=+0.368280816 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, container_name=ovn_controller, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible) Oct 5 05:00:44 localhost podman[104251]: 2025-10-05 09:00:44.990894862 +0000 UTC m=+0.385020171 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.expose-services=) Oct 5 05:00:44 localhost podman[104251]: unhealthy Oct 5 05:00:44 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:00:44 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:00:45 localhost podman[104250]: 2025-10-05 09:00:45.011775031 +0000 UTC m=+0.414347330 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:00:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 05:00:45 localhost podman[104259]: 2025-10-05 09:00:45.207256117 +0000 UTC m=+0.595387553 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 5 05:00:45 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:00:52 localhost podman[104424]: 2025-10-05 09:00:52.688250651 +0000 UTC m=+0.098805408 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step5, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 5 05:00:52 localhost podman[104424]: 2025-10-05 09:00:52.714191196 +0000 UTC m=+0.124745963 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, vcs-type=git) Oct 5 05:00:52 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 05:01:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 05:01:05 localhost podman[104476]: 2025-10-05 09:01:05.668879017 +0000 UTC m=+0.079160130 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, container_name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:01:05 localhost podman[104476]: 2025-10-05 09:01:05.878830331 +0000 UTC m=+0.289111464 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, architecture=x86_64) Oct 5 05:01:05 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:01:15 localhost podman[104505]: 2025-10-05 09:01:15.677851079 +0000 UTC m=+0.085329838 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., release=2) Oct 5 05:01:15 localhost podman[104532]: 2025-10-05 09:01:15.76559531 +0000 UTC m=+0.147324353 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 05:01:15 localhost podman[104532]: 2025-10-05 09:01:15.781811118 +0000 UTC m=+0.163540151 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, 
build-date=2025-07-21T16:28:53, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, vcs-type=git) Oct 5 05:01:15 localhost podman[104526]: 2025-10-05 09:01:15.784673656 +0000 UTC m=+0.170681294 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 05:01:15 localhost podman[104532]: unhealthy Oct 5 05:01:15 localhost podman[104520]: 2025-10-05 09:01:15.739903326 +0000 UTC m=+0.123699604 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Oct 5 05:01:15 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:01:15 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:01:15 localhost podman[104506]: 2025-10-05 09:01:15.708427225 +0000 UTC m=+0.106348125 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, container_name=ovn_controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, tcib_managed=true) Oct 5 05:01:15 localhost podman[104520]: 2025-10-05 09:01:15.820885054 +0000 
UTC m=+0.204681322 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team) Oct 5 05:01:15 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. Oct 5 05:01:15 localhost podman[104514]: 2025-10-05 09:01:15.838988943 +0000 UTC m=+0.231505077 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:01:15 localhost podman[104505]: 2025-10-05 09:01:15.891092752 +0000 UTC m=+0.298571571 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, container_name=collectd, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 5 05:01:15 localhost podman[104513]: 2025-10-05 09:01:15.897451163 +0000 UTC m=+0.282414584 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., 
container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, release=1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 05:01:15 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 05:01:15 localhost podman[104526]: 2025-10-05 09:01:15.917953257 +0000 UTC m=+0.303960915 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 05:01:15 localhost podman[104513]: 2025-10-05 09:01:15.926142528 +0000 UTC m=+0.311105899 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute) Oct 5 05:01:15 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 05:01:15 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 05:01:15 localhost podman[104507]: 2025-10-05 09:01:15.963982002 +0000 UTC m=+0.349256671 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
distribution-scope=public, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Oct 5 05:01:15 localhost podman[104507]: 2025-10-05 09:01:15.973587051 +0000 UTC m=+0.358861720 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 05:01:15 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 05:01:15 localhost podman[104506]: 2025-10-05 09:01:15.992706248 +0000 UTC m=+0.390627158 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 05:01:15 localhost podman[104506]: unhealthy Oct 5 05:01:16 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:01:16 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 05:01:16 localhost podman[104514]: 2025-10-05 09:01:16.181874511 +0000 UTC m=+0.574390665 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp 
openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1) Oct 5 05:01:16 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:01:16 localhost systemd[1]: tmp-crun.EtYmp7.mount: Deactivated successfully. Oct 5 05:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:01:23 localhost podman[104679]: 2025-10-05 09:01:23.678489213 +0000 UTC m=+0.085289686 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 05:01:23 localhost podman[104679]: 2025-10-05 09:01:23.707769364 +0000 UTC m=+0.114569817 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-type=git, 
release=1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9) Oct 5 05:01:23 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 05:01:27 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:01:27 localhost recover_tripleo_nova_virtqemud[104723]: 62622 Oct 5 05:01:27 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:01:27 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 05:01:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 05:01:36 localhost podman[104786]: 2025-10-05 09:01:36.69542753 +0000 UTC m=+0.095917163 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 05:01:36 localhost podman[104786]: 2025-10-05 09:01:36.889620619 +0000 UTC m=+0.290110292 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 5 05:01:36 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 05:01:46 localhost systemd[1]: tmp-crun.Kc1pR7.mount: Deactivated successfully. Oct 5 05:01:46 localhost podman[104830]: 2025-10-05 09:01:46.725501144 +0000 UTC m=+0.097535057 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 05:01:46 localhost systemd[1]: tmp-crun.lH69pZ.mount: Deactivated successfully. Oct 5 05:01:46 localhost podman[104830]: 2025-10-05 09:01:46.778739933 +0000 UTC m=+0.150773816 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 05:01:46 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 05:01:46 localhost podman[104815]: 2025-10-05 09:01:46.830767548 +0000 UTC m=+0.223227374 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, config_id=tripleo_step3, container_name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public) Oct 5 05:01:46 localhost podman[104829]: 2025-10-05 09:01:46.781523108 +0000 UTC m=+0.159357758 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 5 05:01:46 localhost podman[104817]: 2025-10-05 09:01:46.867944613 +0000 UTC m=+0.250353597 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, release=1, container_name=iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) 
Oct 5 05:01:46 localhost podman[104842]: 2025-10-05 09:01:46.874976853 +0000 UTC m=+0.239765181 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO 
Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T13:07:52, distribution-scope=public) Oct 5 05:01:46 localhost podman[104818]: 2025-10-05 09:01:46.929543207 +0000 UTC m=+0.311035077 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, version=17.1.9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc.) Oct 5 05:01:46 localhost podman[104842]: 2025-10-05 09:01:46.935564951 +0000 UTC m=+0.300353359 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 5 05:01:46 localhost podman[104852]: 2025-10-05 09:01:46.942558019 +0000 UTC m=+0.299595568 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, container_name=ovn_metadata_agent, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:01:46 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 05:01:46 localhost podman[104818]: 2025-10-05 09:01:46.969726014 +0000 UTC m=+0.351217914 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, release=1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc.) Oct 5 05:01:46 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 05:01:46 localhost podman[104816]: 2025-10-05 09:01:46.981013439 +0000 UTC m=+0.370478943 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container) Oct 5 05:01:46 localhost podman[104852]: 2025-10-05 09:01:46.984443541 +0000 UTC m=+0.341481030 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git) Oct 5 05:01:46 localhost podman[104852]: unhealthy Oct 5 05:01:46 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:01:46 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:01:46 localhost podman[104816]: 2025-10-05 09:01:46.997557156 +0000 UTC m=+0.387022670 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, container_name=ovn_controller) Oct 5 05:01:47 localhost podman[104816]: unhealthy Oct 5 05:01:47 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:01:47 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:01:47 localhost podman[104815]: 2025-10-05 09:01:47.022754557 +0000 UTC m=+0.415214393 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, release=2, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, distribution-scope=public, container_name=collectd, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 5 05:01:47 localhost podman[104817]: 2025-10-05 09:01:47.036998892 +0000 UTC m=+0.419407826 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15) Oct 5 05:01:47 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 05:01:47 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 05:01:47 localhost podman[104829]: 2025-10-05 09:01:47.122814741 +0000 UTC m=+0.500649441 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, version=17.1.9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:01:47 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:01:54 localhost podman[104982]: 2025-10-05 09:01:54.678996373 +0000 UTC m=+0.085395660 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-nova-compute-container) Oct 5 05:01:54 localhost podman[104982]: 2025-10-05 09:01:54.734895413 +0000 UTC m=+0.141294720 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 
nova-compute, architecture=x86_64, container_name=nova_compute, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true) Oct 5 05:01:54 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 05:02:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 05:02:07 localhost podman[105008]: 2025-10-05 09:02:07.686629086 +0000 UTC m=+0.090758833 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, release=1) Oct 5 05:02:07 localhost podman[105008]: 2025-10-05 09:02:07.915130592 +0000 UTC m=+0.319260419 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=metrics_qdr, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 5 05:02:07 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. 
Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:02:17 localhost podman[105039]: 2025-10-05 09:02:17.709033222 +0000 UTC m=+0.103772516 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, container_name=iscsid, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12) Oct 5 05:02:17 localhost podman[105039]: 2025-10-05 09:02:17.720876051 +0000 UTC m=+0.115615395 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 05:02:17 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 05:02:17 localhost systemd[1]: tmp-crun.IXdnGp.mount: Deactivated successfully. 
Oct 5 05:02:17 localhost podman[105067]: 2025-10-05 09:02:17.771334145 +0000 UTC m=+0.147312373 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, architecture=x86_64, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 05:02:17 localhost podman[105067]: 2025-10-05 09:02:17.820890835 +0000 UTC m=+0.196869063 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:02:17 localhost podman[105067]: unhealthy Oct 5 05:02:17 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:17 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:02:17 localhost podman[105046]: 2025-10-05 09:02:17.874998527 +0000 UTC m=+0.261214881 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 05:02:17 localhost podman[105056]: 2025-10-05 09:02:17.827565785 +0000 UTC m=+0.208864756 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:02:17 localhost podman[105056]: 2025-10-05 09:02:17.911906534 +0000 UTC m=+0.293205445 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 05:02:17 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Deactivated successfully. 
Oct 5 05:02:17 localhost podman[105037]: 2025-10-05 09:02:17.917776433 +0000 UTC m=+0.320759600 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 05:02:17 localhost podman[105038]: 2025-10-05 09:02:17.973895439 +0000 UTC m=+0.370158275 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 05:02:17 localhost podman[105063]: 2025-10-05 09:02:17.986654175 +0000 UTC m=+0.364278736 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, release=1, managed_by=tripleo_ansible) Oct 5 05:02:18 localhost podman[105037]: 2025-10-05 09:02:18.005305208 +0000 UTC m=+0.408288355 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, managed_by=tripleo_ansible, release=2) Oct 5 05:02:18 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. 
Oct 5 05:02:18 localhost podman[105063]: 2025-10-05 09:02:18.022668798 +0000 UTC m=+0.400293329 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, 
build-date=2025-07-21T13:07:52, config_id=tripleo_step4, container_name=logrotate_crond, batch=17.1_20250721.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 05:02:18 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 05:02:18 localhost podman[105038]: 2025-10-05 09:02:18.044160988 +0000 UTC m=+0.440423824 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, tcib_managed=true, vcs-type=git, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 05:02:18 localhost podman[105038]: unhealthy Oct 5 05:02:18 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:18 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:02:18 localhost podman[105042]: 2025-10-05 09:02:18.045949057 +0000 UTC m=+0.437736251 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9) Oct 5 05:02:18 localhost podman[105042]: 2025-10-05 09:02:18.125688851 +0000 UTC m=+0.517476075 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, release=1, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:02:18 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 05:02:18 localhost podman[105046]: 2025-10-05 09:02:18.26180288 +0000 UTC m=+0.648019244 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:02:18 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:02:25 localhost systemd[1]: tmp-crun.ORWAbd.mount: Deactivated successfully. Oct 5 05:02:25 localhost podman[105205]: 2025-10-05 09:02:25.684790284 +0000 UTC m=+0.087632500 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, release=1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step5, architecture=x86_64, tcib_managed=true, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37) Oct 5 05:02:25 localhost podman[105205]: 2025-10-05 09:02:25.714924809 +0000 UTC m=+0.117767015 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
managed_by=tripleo_ansible, release=1, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, architecture=x86_64, tcib_managed=true) Oct 5 05:02:25 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Deactivated successfully. Oct 5 05:02:27 localhost sshd[105230]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:02:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 05:02:38 localhost podman[105310]: 2025-10-05 09:02:38.690019184 +0000 UTC m=+0.094137724 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:02:38 localhost podman[105310]: 2025-10-05 09:02:38.873783111 +0000 UTC m=+0.277901651 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 05:02:38 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:02:40 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:02:40 localhost recover_tripleo_nova_virtqemud[105340]: 62622 Oct 5 05:02:40 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:02:40 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:02:48 localhost podman[105343]: 2025-10-05 09:02:48.71691919 +0000 UTC m=+0.109495440 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, distribution-scope=public, tcib_managed=true) Oct 5 05:02:48 localhost podman[105363]: 2025-10-05 09:02:48.781359021 +0000 UTC m=+0.153452667 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, io.openshift.expose-services=) Oct 5 05:02:48 localhost podman[105362]: 2025-10-05 09:02:48.831999311 +0000 UTC m=+0.207617963 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO 
Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 05:02:48 localhost podman[105344]: 2025-10-05 09:02:48.733031036 +0000 UTC m=+0.114894457 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, release=1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, config_id=tripleo_step4) Oct 5 05:02:48 localhost podman[105362]: 2025-10-05 09:02:48.841436906 +0000 UTC m=+0.217055578 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Oct 5 05:02:48 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 05:02:48 localhost podman[105341]: 2025-10-05 09:02:48.762059011 +0000 UTC m=+0.161304671 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., release=2, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 5 05:02:48 localhost podman[105344]: 2025-10-05 09:02:48.864640022 +0000 UTC m=+0.246503503 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 5 05:02:48 localhost podman[105363]: 2025-10-05 09:02:48.908121398 +0000 UTC m=+0.280215144 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git) Oct 5 05:02:48 localhost podman[105363]: unhealthy Oct 5 05:02:48 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:48 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 05:02:48 localhost podman[105350]: 2025-10-05 09:02:48.926073683 +0000 UTC m=+0.313151015 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 5 05:02:48 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 05:02:48 localhost podman[105341]: 2025-10-05 09:02:48.946944267 +0000 UTC m=+0.346189997 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, 
distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, tcib_managed=true) Oct 5 05:02:48 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 05:02:49 localhost podman[105342]: 2025-10-05 09:02:49.00366602 +0000 UTC m=+0.400237748 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 05:02:49 localhost podman[105356]: 2025-10-05 09:02:49.043455486 +0000 UTC m=+0.415885921 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Oct 5 05:02:49 localhost podman[105342]: 2025-10-05 09:02:49.050907487 +0000 UTC m=+0.447479125 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, 
build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, release=1, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:02:49 localhost podman[105342]: unhealthy Oct 5 05:02:49 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:49 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 05:02:49 localhost podman[105356]: 2025-10-05 09:02:49.080862836 +0000 UTC m=+0.453293281 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, release=1, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO 
Team, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 05:02:49 localhost podman[105356]: unhealthy Oct 5 05:02:49 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:49 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed with result 'exit-code'. Oct 5 05:02:49 localhost podman[105343]: 2025-10-05 09:02:49.112181553 +0000 UTC m=+0.504757783 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, container_name=iscsid) Oct 5 05:02:49 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 05:02:49 localhost podman[105350]: 2025-10-05 09:02:49.315737684 +0000 UTC m=+0.702814956 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 05:02:49 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:02:56 localhost systemd[1]: tmp-crun.QyDAym.mount: Deactivated successfully. Oct 5 05:02:56 localhost podman[105514]: 2025-10-05 09:02:56.68956084 +0000 UTC m=+0.097561917 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, container_name=nova_compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 
'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-type=git) Oct 5 05:02:56 localhost podman[105514]: 2025-10-05 09:02:56.713750604 +0000 UTC m=+0.121751741 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:02:56 localhost podman[105514]: unhealthy Oct 5 05:02:56 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:56 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 05:03:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. 
Oct 5 05:03:09 localhost podman[105536]: 2025-10-05 09:03:09.676561097 +0000 UTC m=+0.085713077 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, release=1, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, container_name=metrics_qdr, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:03:09 localhost podman[105536]: 2025-10-05 09:03:09.854696631 +0000 UTC m=+0.263848561 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, tcib_managed=true, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, vcs-type=git) Oct 5 05:03:09 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. 
Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:03:19 localhost systemd[1]: tmp-crun.7fOYgU.mount: Deactivated successfully. Oct 5 05:03:19 localhost podman[105573]: 2025-10-05 09:03:19.709438264 +0000 UTC m=+0.100698743 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_compute, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 05:03:19 localhost podman[105573]: 2025-10-05 09:03:19.722513416 +0000 UTC m=+0.113773895 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute) Oct 5 05:03:19 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. 
Oct 5 05:03:19 localhost podman[105568]: 2025-10-05 09:03:19.760710779 +0000 UTC m=+0.154733382 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, container_name=iscsid, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 05:03:19 localhost podman[105568]: 2025-10-05 09:03:19.771564522 +0000 UTC m=+0.165587135 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, container_name=iscsid, 
distribution-scope=public, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.openshift.expose-services=) Oct 5 05:03:19 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 05:03:19 localhost podman[105580]: 2025-10-05 09:03:19.807167685 +0000 UTC m=+0.191880587 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute) Oct 5 05:03:19 localhost podman[105566]: 2025-10-05 09:03:19.811560633 +0000 UTC m=+0.212850373 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, container_name=collectd, release=2, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9) Oct 5 05:03:19 localhost podman[105566]: 2025-10-05 09:03:19.822578801 +0000 UTC m=+0.223868551 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03) Oct 5 05:03:19 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 05:03:19 localhost podman[105600]: 2025-10-05 09:03:19.87286431 +0000 UTC m=+0.244727985 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent) Oct 5 05:03:19 localhost podman[105600]: 2025-10-05 09:03:19.909068019 +0000 UTC m=+0.280931674 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:03:19 
localhost podman[105600]: unhealthy Oct 5 05:03:19 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:19 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 05:03:19 localhost podman[105567]: 2025-10-05 09:03:19.95759418 +0000 UTC m=+0.356794834 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9) Oct 5 05:03:19 localhost podman[105593]: 2025-10-05 09:03:19.909639224 +0000 UTC m=+0.287149951 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, container_name=logrotate_crond, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Oct 5 05:03:19 localhost podman[105590]: 2025-10-05 09:03:19.963840829 +0000 UTC m=+0.342062506 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
container_name=ceilometer_agent_ipmi, vcs-type=git, config_id=tripleo_step4, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, batch=17.1_20250721.1) Oct 5 05:03:19 localhost podman[105593]: 2025-10-05 09:03:19.992446992 +0000 UTC m=+0.369957679 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, version=17.1.9, architecture=x86_64, release=1) Oct 5 05:03:20 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. 
Oct 5 05:03:20 localhost podman[105590]: 2025-10-05 09:03:20.0178882 +0000 UTC m=+0.396109907 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 05:03:20 localhost podman[105590]: unhealthy Oct 5 05:03:20 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:20 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed with result 'exit-code'. Oct 5 05:03:20 localhost podman[105567]: 2025-10-05 09:03:20.047020307 +0000 UTC m=+0.446220981 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, release=1, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=) Oct 5 05:03:20 localhost podman[105567]: unhealthy Oct 5 05:03:20 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:20 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:03:20 localhost podman[105580]: 2025-10-05 09:03:20.203946498 +0000 UTC m=+0.588659450 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 5 05:03:20 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 05:03:27 localhost podman[105738]: 2025-10-05 09:03:27.676433459 +0000 UTC m=+0.088428641 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, container_name=nova_compute, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public) Oct 5 05:03:27 localhost podman[105738]: 2025-10-05 09:03:27.696562513 +0000 UTC m=+0.108557695 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 05:03:27 localhost podman[105738]: unhealthy 
Oct 5 05:03:27 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:27 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 05:03:39 localhost sshd[105837]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:03:39 localhost systemd-logind[760]: New session 37 of user zuul. Oct 5 05:03:39 localhost systemd[1]: Started Session 37 of User zuul. Oct 5 05:03:40 localhost python3.9[105932]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:03:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 05:03:40 localhost podman[105934]: 2025-10-05 09:03:40.171587662 +0000 UTC m=+0.079016067 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-type=git, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc.) 
Oct 5 05:03:40 localhost podman[105934]: 2025-10-05 09:03:40.353692873 +0000 UTC m=+0.261121268 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, 
tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:03:40 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:03:40 localhost python3.9[106055]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:03:41 localhost python3.9[106148]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:03:42 localhost python3.9[106242]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:03:43 localhost python3.9[106335]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:03:43 localhost python3.9[106426]: ansible-ansible.builtin.slurp Invoked with 
src=/proc/cmdline Oct 5 05:03:45 localhost python3.9[106516]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:03:45 localhost python3.9[106608]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Oct 5 05:03:47 localhost python3.9[106698]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:03:47 localhost python3.9[106746]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:03:48 localhost systemd[1]: session-37.scope: Deactivated successfully. Oct 5 05:03:48 localhost systemd[1]: session-37.scope: Consumed 4.925s CPU time. Oct 5 05:03:48 localhost systemd-logind[760]: Session 37 logged out. Waiting for processes to exit. Oct 5 05:03:48 localhost systemd-logind[760]: Removed session 37. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. 
Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:03:50 localhost podman[106763]: 2025-10-05 09:03:50.719487359 +0000 UTC m=+0.106346006 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, architecture=x86_64, 
batch=17.1_20250721.1, io.openshift.expose-services=, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:03:50 localhost systemd[1]: tmp-crun.nqJWvd.mount: Deactivated successfully. Oct 5 05:03:50 localhost podman[106763]: 2025-10-05 09:03:50.759959853 +0000 UTC m=+0.146818470 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, release=1, 
vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:03:50 localhost podman[106778]: 2025-10-05 09:03:50.760356104 +0000 UTC m=+0.139545323 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=logrotate_crond, tcib_managed=true, batch=17.1_20250721.1) Oct 5 05:03:50 localhost podman[106763]: unhealthy Oct 5 05:03:50 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:50 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 05:03:50 localhost podman[106764]: 2025-10-05 09:03:50.860534871 +0000 UTC m=+0.250584924 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, architecture=x86_64, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3) Oct 5 05:03:50 localhost podman[106766]: 2025-10-05 09:03:50.879790011 +0000 UTC m=+0.268128067 container health_status 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, tcib_managed=true) Oct 5 05:03:50 localhost podman[106776]: 2025-10-05 09:03:50.834889247 +0000 UTC m=+0.215741581 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 05:03:50 localhost podman[106772]: 2025-10-05 09:03:50.739594842 +0000 UTC m=+0.115373909 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, container_name=nova_migration_target, vcs-type=git, release=1, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, 
io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:03:50 localhost podman[106778]: 2025-10-05 09:03:50.895973938 +0000 UTC m=+0.275163207 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, 
distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, release=1) Oct 5 05:03:50 localhost podman[106762]: 2025-10-05 09:03:50.80351978 +0000 UTC m=+0.203140881 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, container_name=collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64) Oct 5 05:03:50 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 05:03:50 localhost podman[106776]: 2025-10-05 09:03:50.917768868 +0000 UTC m=+0.298621232 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, release=1, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Oct 5 05:03:50 localhost podman[106776]: unhealthy Oct 5 05:03:50 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:50 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed with result 'exit-code'. 
Oct 5 05:03:50 localhost podman[106762]: 2025-10-05 09:03:50.936955156 +0000 UTC m=+0.336576267 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=2, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:04:03) Oct 5 05:03:50 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 05:03:50 localhost podman[106764]: 2025-10-05 09:03:50.949808303 +0000 UTC m=+0.339858396 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 05:03:50 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. 
Oct 5 05:03:50 localhost podman[106766]: 2025-10-05 09:03:50.9881566 +0000 UTC m=+0.376494626 container exec_died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible) Oct 5 05:03:51 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Deactivated successfully. Oct 5 05:03:51 localhost podman[106790]: 2025-10-05 09:03:51.040163616 +0000 UTC m=+0.410573758 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1) Oct 5 05:03:51 localhost podman[106790]: 2025-10-05 09:03:51.055744856 +0000 UTC m=+0.426154988 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 05:03:51 localhost podman[106790]: unhealthy Oct 5 05:03:51 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:51 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 05:03:51 localhost podman[106772]: 2025-10-05 09:03:51.102822539 +0000 UTC m=+0.478601636 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:03:51 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:03:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5535 DF PROTO=TCP SPT=33598 DPT=9882 SEQ=3304236190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECAF4A20000000001030307) Oct 5 05:03:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=574 DF PROTO=TCP SPT=60708 DPT=9105 SEQ=3302877223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECAF6DE0000000001030307) Oct 5 05:03:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5536 DF PROTO=TCP SPT=33598 DPT=9882 SEQ=3304236190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECAF89D0000000001030307) Oct 5 
05:03:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=575 DF PROTO=TCP SPT=60708 DPT=9105 SEQ=3302877223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECAFADD0000000001030307) Oct 5 05:03:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20215 DF PROTO=TCP SPT=36140 DPT=9101 SEQ=274814226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECAFB1E0000000001030307) Oct 5 05:03:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20216 DF PROTO=TCP SPT=36140 DPT=9101 SEQ=274814226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECAFF1E0000000001030307) Oct 5 05:03:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5537 DF PROTO=TCP SPT=33598 DPT=9882 SEQ=3304236190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB009D0000000001030307) Oct 5 05:03:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=576 DF PROTO=TCP SPT=60708 DPT=9105 SEQ=3302877223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB02DD0000000001030307) Oct 5 05:03:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20217 DF PROTO=TCP SPT=36140 DPT=9101 SEQ=274814226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB071D0000000001030307) Oct 
5 05:03:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5538 DF PROTO=TCP SPT=33598 DPT=9882 SEQ=3304236190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB105E0000000001030307) Oct 5 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:03:58 localhost systemd[1]: tmp-crun.YLrb3L.mount: Deactivated successfully. Oct 5 05:03:58 localhost podman[106931]: 2025-10-05 09:03:58.666769591 +0000 UTC m=+0.076962950 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, tcib_managed=true, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 5 05:03:58 localhost podman[106931]: 2025-10-05 09:03:58.715119338 +0000 UTC m=+0.125312777 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, 
Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, distribution-scope=public, release=1) Oct 5 05:03:58 localhost podman[106931]: unhealthy Oct 5 05:03:58 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:58 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 05:03:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=577 DF PROTO=TCP SPT=60708 DPT=9105 SEQ=3302877223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB129D0000000001030307) Oct 5 05:04:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20218 DF PROTO=TCP SPT=36140 DPT=9101 SEQ=274814226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB16DD0000000001030307) Oct 5 05:04:03 localhost sshd[106952]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:04:03 localhost systemd-logind[760]: New session 38 of user zuul. Oct 5 05:04:03 localhost systemd[1]: Started Session 38 of User zuul. 
Oct 5 05:04:04 localhost python3.9[107047]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:04:04 localhost systemd[1]: Reloading. Oct 5 05:04:04 localhost systemd-rc-local-generator[107068]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:04:04 localhost systemd-sysv-generator[107075]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:04:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:04:04 localhost systemd[1]: Starting dnf makecache... Oct 5 05:04:05 localhost dnf[107084]: Updating Subscription Management repositories. Oct 5 05:04:05 localhost python3.9[107174]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:04:05 localhost network[107191]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:04:05 localhost network[107192]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:04:05 localhost network[107193]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:04:06 localhost dnf[107084]: Metadata cache refreshed recently. Oct 5 05:04:07 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Oct 5 05:04:07 localhost systemd[1]: Finished dnf makecache. Oct 5 05:04:07 localhost systemd[1]: dnf-makecache.service: Consumed 2.197s CPU time. Oct 5 05:04:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:04:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 05:04:10 localhost podman[107301]: 2025-10-05 09:04:10.483643172 +0000 UTC m=+0.085808889 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, 
com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, container_name=metrics_qdr, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public) Oct 5 05:04:10 localhost podman[107301]: 2025-10-05 09:04:10.676174806 +0000 UTC m=+0.278340553 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, release=1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container) Oct 5 05:04:10 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:04:11 localhost python3.9[107419]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:04:11 localhost network[107436]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:04:11 localhost network[107437]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:04:11 localhost network[107438]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:04:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:04:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16127 DF PROTO=TCP SPT=42172 DPT=9102 SEQ=4104871297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB4A4D0000000001030307) Oct 5 05:04:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16128 DF PROTO=TCP SPT=42172 DPT=9102 SEQ=4104871297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB4E5E0000000001030307) Oct 5 05:04:15 localhost python3.9[107637]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:04:15 localhost systemd[1]: Reloading. Oct 5 05:04:15 localhost systemd-rc-local-generator[107667]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:04:15 localhost systemd-sysv-generator[107670]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:04:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:04:16 localhost systemd[1]: Stopping ceilometer_agent_compute container... 
Oct 5 05:04:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16129 DF PROTO=TCP SPT=42172 DPT=9102 SEQ=4104871297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB565D0000000001030307) Oct 5 05:04:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13931 DF PROTO=TCP SPT=55548 DPT=9100 SEQ=361991960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB57DC0000000001030307) Oct 5 05:04:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13932 DF PROTO=TCP SPT=55548 DPT=9100 SEQ=361991960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB5BDD0000000001030307) Oct 5 05:04:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13933 DF PROTO=TCP SPT=55548 DPT=9100 SEQ=361991960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB63DE0000000001030307) Oct 5 05:04:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16130 DF PROTO=TCP SPT=42172 DPT=9102 SEQ=4104871297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB661D0000000001030307) Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. 
Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:04:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38455 DF PROTO=TCP SPT=58238 DPT=9882 SEQ=1742034054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB69D40000000001030307) Oct 5 05:04:21 localhost systemd[1]: tmp-crun.rJOcla.mount: Deactivated successfully. 
Oct 5 05:04:21 localhost podman[107694]: 2025-10-05 09:04:21.47078676 +0000 UTC m=+0.112406399 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vcs-type=git, version=17.1.9, release=1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public)
Oct 5 05:04:21 localhost podman[107694]: 2025-10-05 09:04:21.476729921 +0000 UTC m=+0.118349530 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, distribution-scope=public)
Oct 5 05:04:21 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully.
Oct 5 05:04:21 localhost podman[107705]: Error: container 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e is not running
Oct 5 05:04:21 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Main process exited, code=exited, status=125/n/a
Oct 5 05:04:21 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Failed with result 'exit-code'.
Oct 5 05:04:21 localhost podman[107692]: 2025-10-05 09:04:21.579855117 +0000 UTC m=+0.229571855 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, io.openshift.expose-services=, release=2, config_id=tripleo_step3, batch=17.1_20250721.1)
Oct 5 05:04:21 localhost podman[107693]: 2025-10-05 09:04:21.616042325 +0000 UTC m=+0.263133652 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 5 05:04:21 localhost podman[107716]: 2025-10-05 09:04:21.631907065 +0000 UTC m=+0.249549166 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64)
Oct 5 05:04:21 localhost podman[107716]: 2025-10-05 09:04:21.643849497 +0000 UTC m=+0.261491588 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-cron)
Oct 5 05:04:21 localhost podman[107693]: 2025-10-05 09:04:21.658830803 +0000 UTC m=+0.305922160 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, version=17.1.9, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 5 05:04:21 localhost podman[107693]: unhealthy
Oct 5 05:04:21 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:04:21 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'.
Oct 5 05:04:21 localhost podman[107727]: 2025-10-05 09:04:21.678458933 +0000 UTC m=+0.291090068 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=ovn_metadata_agent, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 5 05:04:21 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully.
Oct 5 05:04:21 localhost podman[107710]: 2025-10-05 09:04:21.559184799 +0000 UTC m=+0.187448237 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, release=1, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 5 05:04:21 localhost podman[107727]: 2025-10-05 09:04:21.716157441 +0000 UTC m=+0.328788586 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 5 05:04:21 localhost podman[107727]: unhealthy
Oct 5 05:04:21 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:04:21 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'.
Oct 5 05:04:21 localhost podman[107692]: 2025-10-05 09:04:21.745636979 +0000 UTC m=+0.395353717 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=collectd, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b)
Oct 5 05:04:21 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully.
Oct 5 05:04:21 localhost podman[107712]: 2025-10-05 09:04:21.493812753 +0000 UTC m=+0.115296717 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi)
Oct 5 05:04:21 localhost podman[107712]: 2025-10-05 09:04:21.828865457 +0000 UTC m=+0.450349461 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, version=17.1.9)
Oct 5 05:04:21 localhost podman[107712]: unhealthy
Oct 5 05:04:21 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:04:21 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed with result 'exit-code'.
Oct 5 05:04:21 localhost podman[107710]: 2025-10-05 09:04:21.912794826 +0000 UTC m=+0.541058334 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, container_name=nova_migration_target, io.openshift.expose-services=) Oct 5 05:04:21 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:04:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64171 DF PROTO=TCP SPT=36534 DPT=9105 SEQ=547688366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB6C0F0000000001030307) Oct 5 05:04:22 localhost systemd[1]: tmp-crun.fg23oJ.mount: Deactivated successfully. Oct 5 05:04:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38456 DF PROTO=TCP SPT=58238 DPT=9882 SEQ=1742034054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB6DDD0000000001030307) Oct 5 05:04:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64172 DF PROTO=TCP SPT=36534 DPT=9105 SEQ=547688366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB701E0000000001030307) Oct 5 05:04:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24474 DF PROTO=TCP SPT=56504 DPT=9101 SEQ=4240077557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB704F0000000001030307) Oct 5 05:04:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13934 DF PROTO=TCP SPT=55548 DPT=9100 SEQ=361991960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB739D0000000001030307) Oct 5 05:04:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38457 DF PROTO=TCP SPT=58238 DPT=9882 SEQ=1742034054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB75DD0000000001030307) Oct 5 05:04:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38458 DF PROTO=TCP SPT=58238 DPT=9882 SEQ=1742034054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECB859D0000000001030307) Oct 5 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:04:29 localhost systemd[1]: tmp-crun.sHI7d5.mount: Deactivated successfully. 
Oct 5 05:04:29 localhost podman[107846]: 2025-10-05 09:04:29.216185522 +0000 UTC m=+0.119368615 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, container_name=nova_compute, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 05:04:29 localhost podman[107846]: 2025-10-05 09:04:29.26042926 +0000 UTC m=+0.163612343 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true) Oct 
5 05:04:29 localhost podman[107846]: unhealthy Oct 5 05:04:29 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:04:29 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 05:04:38 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:04:38 localhost recover_tripleo_nova_virtqemud[107946]: 62622 Oct 5 05:04:38 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:04:38 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 05:04:39 localhost sshd[107947]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 05:04:40 localhost podman[107948]: 2025-10-05 09:04:40.948115151 +0000 UTC m=+0.098637476 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, architecture=x86_64) Oct 5 05:04:41 localhost podman[107948]: 2025-10-05 09:04:41.147751586 +0000 UTC m=+0.298273921 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, release=1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 5 05:04:41 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. 
Oct 5 05:04:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21232 DF PROTO=TCP SPT=39616 DPT=9102 SEQ=2503276092 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECBBF7E0000000001030307) Oct 5 05:04:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21233 DF PROTO=TCP SPT=39616 DPT=9102 SEQ=2503276092 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECBC39D0000000001030307) Oct 5 05:04:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21234 DF PROTO=TCP SPT=39616 DPT=9102 SEQ=2503276092 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECBCB9D0000000001030307) Oct 5 05:04:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24695 DF PROTO=TCP SPT=59974 DPT=9100 SEQ=1539839601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECBCD0F0000000001030307) Oct 5 05:04:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24696 DF PROTO=TCP SPT=59974 DPT=9100 SEQ=1539839601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECBD11D0000000001030307) Oct 5 05:04:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24697 DF PROTO=TCP SPT=59974 DPT=9100 SEQ=1539839601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080ABECBD91E0000000001030307) Oct 5 05:04:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21235 DF PROTO=TCP SPT=39616 DPT=9102 SEQ=2503276092 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECBDB5D0000000001030307) Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:04:51 localhost podman[107978]: Error: container 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e is not running Oct 5 05:04:51 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Main process exited, code=exited, status=125/n/a Oct 5 05:04:51 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Failed with result 'exit-code'. 
Oct 5 05:04:51 localhost podman[107977]: 2025-10-05 09:04:51.744602637 +0000 UTC m=+0.151860685 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-iscsid, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:04:51 localhost podman[107977]: 2025-10-05 09:04:51.762862241 +0000 UTC m=+0.170120239 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, architecture=x86_64, release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:04:51 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 05:04:51 localhost systemd[1]: tmp-crun.aXEtQe.mount: Deactivated successfully. 
Oct 5 05:04:51 localhost podman[108010]: 2025-10-05 09:04:51.86160657 +0000 UTC m=+0.094247849 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, distribution-scope=public, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git) Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:04:51 localhost podman[108011]: 2025-10-05 09:04:51.87346023 +0000 UTC m=+0.098230506 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 05:04:51 localhost podman[108012]: 2025-10-05 09:04:51.912797833 +0000 UTC m=+0.136234642 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:04:51 localhost podman[108012]: 2025-10-05 09:04:51.930115451 +0000 UTC m=+0.153552310 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9) Oct 5 05:04:51 localhost podman[108012]: unhealthy Oct 5 05:04:51 localhost podman[108011]: 2025-10-05 09:04:51.958893799 +0000 UTC 
m=+0.183664035 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack 
Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:04:51 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:04:51 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 05:04:51 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 05:04:52 localhost podman[108068]: 2025-10-05 09:04:52.009676261 +0000 UTC m=+0.128694559 container health_status 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12) Oct 5 05:04:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16358 DF PROTO=TCP SPT=42602 DPT=9105 SEQ=4258405048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECBE13F0000000001030307) Oct 5 05:04:52 localhost podman[108018]: 2025-10-05 09:04:51.980640946 +0000 UTC m=+0.193638804 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., container_name=collectd, release=2, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 05:04:52 localhost 
podman[108068]: 2025-10-05 09:04:52.040377331 +0000 UTC m=+0.159395639 container exec_died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-07-21T15:29:47, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12) Oct 5 05:04:52 localhost podman[108068]: unhealthy Oct 5 05:04:52 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:04:52 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed with result 'exit-code'. Oct 5 05:04:52 localhost podman[108018]: 2025-10-05 09:04:52.066794815 +0000 UTC m=+0.279792743 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Oct 5 05:04:52 localhost podman[108010]: 2025-10-05 09:04:52.083904057 +0000 UTC m=+0.316545286 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vcs-type=git, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 05:04:52 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 05:04:52 localhost podman[108010]: unhealthy Oct 5 05:04:52 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:04:52 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 05:04:52 localhost podman[108095]: 2025-10-05 09:04:52.175561885 +0000 UTC m=+0.191852836 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, distribution-scope=public, container_name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat 
OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:04:52 localhost podman[108095]: 2025-10-05 09:04:52.55207477 +0000 UTC m=+0.568365701 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9) Oct 5 05:04:52 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:04:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13781 DF PROTO=TCP SPT=42292 DPT=9882 SEQ=753788427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECBEB1D0000000001030307) Oct 5 05:04:58 localhost podman[107678]: time="2025-10-05T09:04:58Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Oct 5 05:04:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:3f:b5:ce MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=40386 SEQ=1116960667 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 5 05:04:58 localhost systemd[1]: libpod-712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.scope: Deactivated successfully. Oct 5 05:04:58 localhost systemd[1]: libpod-712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.scope: Consumed 6.528s CPU time. 
Oct 5 05:04:58 localhost podman[107678]: 2025-10-05 09:04:58.277634468 +0000 UTC m=+42.074878642 container stop 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
batch=17.1_20250721.1, version=17.1.9, vcs-type=git, build-date=2025-07-21T14:45:33, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.33.12) Oct 5 05:04:58 localhost podman[107678]: 2025-10-05 09:04:58.310613309 +0000 UTC m=+42.107857523 container died 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, config_id=tripleo_step4, release=1, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 05:04:58 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.timer: Deactivated successfully. Oct 5 05:04:58 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e. Oct 5 05:04:58 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Failed to open /run/systemd/transient/712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: No such file or directory Oct 5 05:04:58 localhost systemd[1]: tmp-crun.l3K9L7.mount: Deactivated successfully. Oct 5 05:04:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:04:58 localhost podman[107678]: 2025-10-05 09:04:58.414670061 +0000 UTC m=+42.211914225 container cleanup 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, maintainer=OpenStack TripleO Team) Oct 5 05:04:58 localhost podman[107678]: ceilometer_agent_compute Oct 5 05:04:58 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.timer: Failed to open /run/systemd/transient/712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.timer: No such file or directory Oct 5 05:04:58 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Failed to open /run/systemd/transient/712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: No such file or directory Oct 5 05:04:58 localhost podman[108146]: 2025-10-05 09:04:58.431354852 +0000 UTC m=+0.135515413 container cleanup 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, release=1, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9) Oct 5 05:04:58 localhost systemd[1]: libpod-conmon-712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.scope: Deactivated successfully. 
Oct 5 05:04:58 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.timer: Failed to open /run/systemd/transient/712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.timer: No such file or directory Oct 5 05:04:58 localhost systemd[1]: 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: Failed to open /run/systemd/transient/712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e.service: No such file or directory Oct 5 05:04:58 localhost podman[108161]: 2025-10-05 09:04:58.534851559 +0000 UTC m=+0.069707915 container cleanup 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, distribution-scope=public, release=1, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4) Oct 5 05:04:58 localhost podman[108161]: ceilometer_agent_compute Oct 5 05:04:58 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Oct 5 05:04:58 localhost systemd[1]: Stopped ceilometer_agent_compute container. Oct 5 05:04:58 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.140s CPU time, no IO. Oct 5 05:04:59 localhost systemd[1]: var-lib-containers-storage-overlay-8570cdbf5236721445e2cc9274307680fdc914cb35802c43017704dbfad14cb1-merged.mount: Deactivated successfully. Oct 5 05:04:59 localhost python3.9[108263]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:04:59 localhost systemd[1]: Reloading. 
Oct 5 05:04:59 localhost podman[108265]: 2025-10-05 09:04:59.468288936 +0000 UTC m=+0.088467471 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, vcs-type=git, config_id=tripleo_step5, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, version=17.1.9, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=) Oct 5 05:04:59 localhost podman[108265]: 2025-10-05 09:04:59.491269757 +0000 UTC m=+0.111448292 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_id=tripleo_step5, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_compute, maintainer=OpenStack TripleO Team) Oct 5 05:04:59 
localhost podman[108265]: unhealthy Oct 5 05:04:59 localhost systemd-rc-local-generator[108309]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:04:59 localhost systemd-sysv-generator[108314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:04:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:04:59 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:04:59 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 05:04:59 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... Oct 5 05:04:59 localhost systemd[1]: tmp-crun.UZ3dxR.mount: Deactivated successfully. 
Oct 5 05:05:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:05:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 701 syncs, 7.43 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:05:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:3f:b5:ce MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=40386 SEQ=1116960667 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 5 05:05:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:3f:b5:ce MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=40386 SEQ=1116960667 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 5 05:05:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:05:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5451 writes, 24K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5451 writes, 723 syncs, 7.54 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8 writes, 16 keys, 8 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 8 writes, 4 syncs, 
2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 05:05:11 localhost systemd[1]: tmp-crun.KrIXrF.mount: Deactivated successfully. Oct 5 05:05:11 localhost podman[108338]: 2025-10-05 09:05:11.431141094 +0000 UTC m=+0.090006624 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=metrics_qdr, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., release=1, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:05:11 localhost podman[108338]: 2025-10-05 09:05:11.637988803 +0000 UTC m=+0.296854373 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, container_name=metrics_qdr, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, vendor=Red Hat, Inc.) Oct 5 05:05:11 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. 
Oct 5 05:05:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:3f:b5:ce MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=40386 SEQ=1116960667 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 5 05:05:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13528 DF PROTO=TCP SPT=37272 DPT=9102 SEQ=2148496866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECC34AD0000000001030307) Oct 5 05:05:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13529 DF PROTO=TCP SPT=37272 DPT=9102 SEQ=2148496866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECC389D0000000001030307) Oct 5 05:05:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13530 DF PROTO=TCP SPT=37272 DPT=9102 SEQ=2148496866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECC409D0000000001030307) Oct 5 05:05:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35446 DF PROTO=TCP SPT=40370 DPT=9100 SEQ=3144415131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECC4E5E0000000001030307) Oct 5 05:05:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58555 DF PROTO=TCP SPT=35710 DPT=9105 SEQ=3495042384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECC566F0000000001030307) Oct 5 05:05:22 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:05:22 localhost podman[108367]: Error: container 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 is not running Oct 5 05:05:22 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Main process exited, code=exited, status=125/n/a Oct 5 05:05:22 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed with result 'exit-code'. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:05:22 localhost systemd[1]: tmp-crun.V5npZH.mount: Deactivated successfully. Oct 5 05:05:22 localhost systemd[1]: tmp-crun.CXogI0.mount: Deactivated successfully. 
Oct 5 05:05:22 localhost podman[108422]: 2025-10-05 09:05:22.280629439 +0000 UTC m=+0.086444377 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, release=1) Oct 5 05:05:22 localhost podman[108368]: 2025-10-05 09:05:22.291230275 +0000 
UTC m=+0.192865984 container health_status c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, 
config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 05:05:22 localhost podman[108366]: 2025-10-05 09:05:22.249734173 +0000 UTC m=+0.156353636 container health_status 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, container_name=iscsid) Oct 5 05:05:22 localhost podman[108385]: 2025-10-05 09:05:22.220486423 +0000 UTC m=+0.109365236 container health_status 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:04:03) Oct 5 05:05:22 localhost podman[108368]: 2025-10-05 09:05:22.328583795 +0000 UTC m=+0.230219474 container exec_died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 05:05:22 localhost podman[108366]: 2025-10-05 09:05:22.328798531 +0000 UTC m=+0.235417944 container exec_died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=iscsid, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 05:05:22 localhost podman[108369]: 2025-10-05 09:05:22.335476671 +0000 UTC m=+0.232038012 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, release=1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 5 05:05:22 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Deactivated successfully. Oct 5 05:05:22 localhost podman[108369]: 2025-10-05 09:05:22.351563746 +0000 UTC m=+0.248125107 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:05:22 localhost podman[108369]: unhealthy Oct 5 05:05:22 localhost podman[108422]: 2025-10-05 09:05:22.370605341 +0000 UTC m=+0.176420279 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 05:05:22 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:22 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:05:22 localhost podman[108422]: unhealthy Oct 5 05:05:22 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:22 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:05:22 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Deactivated successfully. Oct 5 05:05:22 localhost podman[108385]: 2025-10-05 09:05:22.402557264 +0000 UTC m=+0.291436037 container exec_died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=2, architecture=x86_64, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 05:05:22 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Deactivated successfully. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. 
Oct 5 05:05:22 localhost podman[108473]: 2025-10-05 09:05:22.675599954 +0000 UTC m=+0.081122904 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 05:05:23 localhost podman[108473]: 2025-10-05 09:05:23.047474253 +0000 UTC m=+0.452997223 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, container_name=nova_migration_target, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 5 05:05:23 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. Oct 5 05:05:23 localhost systemd[1]: tmp-crun.W1ZzrE.mount: Deactivated successfully. Oct 5 05:05:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65309 DF PROTO=TCP SPT=47752 DPT=9882 SEQ=1652060374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECC601D0000000001030307) Oct 5 05:05:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65310 DF PROTO=TCP SPT=47752 DPT=9882 SEQ=1652060374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECC6FDD0000000001030307) Oct 5 05:05:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 05:05:29 localhost podman[108496]: 2025-10-05 09:05:29.924432802 +0000 UTC m=+0.084670189 container health_status 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, distribution-scope=public, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., architecture=x86_64) Oct 5 05:05:29 localhost podman[108496]: 2025-10-05 09:05:29.942025037 +0000 UTC m=+0.102262394 container exec_died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:05:29 localhost podman[108496]: unhealthy Oct 5 05:05:29 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:29 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 05:05:41 localhost podman[108325]: time="2025-10-05T09:05:41Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL" Oct 5 05:05:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569. Oct 5 05:05:41 localhost systemd[1]: libpod-80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.scope: Deactivated successfully. Oct 5 05:05:41 localhost systemd[1]: libpod-80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.scope: Consumed 6.725s CPU time. 
Oct 5 05:05:41 localhost podman[108325]: 2025-10-05 09:05:41.950032422 +0000 UTC m=+42.118749536 container died 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git) Oct 5 05:05:41 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.timer: Deactivated successfully. Oct 5 05:05:41 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6. Oct 5 05:05:41 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed to open /run/systemd/transient/80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: No such file or directory Oct 5 05:05:41 localhost systemd[1]: tmp-crun.ejxEq8.mount: Deactivated successfully. 
Oct 5 05:05:42 localhost podman[108596]: 2025-10-05 09:05:42.053652903 +0000 UTC m=+0.100424275 container health_status 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=) Oct 5 05:05:42 localhost podman[108325]: 2025-10-05 09:05:42.069973564 +0000 UTC m=+42.238690688 container cleanup 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, release=1, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible) Oct 5 05:05:42 localhost podman[108325]: ceilometer_agent_ipmi Oct 5 05:05:42 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.timer: Failed to open /run/systemd/transient/80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.timer: No such file or directory Oct 5 05:05:42 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed to open /run/systemd/transient/80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: No such file or directory Oct 5 05:05:42 localhost podman[108597]: 2025-10-05 09:05:42.100134648 +0000 UTC m=+0.141969857 container cleanup 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, 
com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, version=17.1.9, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 05:05:42 localhost systemd[1]: libpod-conmon-80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.scope: Deactivated successfully. 
Oct 5 05:05:42 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.timer: Failed to open /run/systemd/transient/80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.timer: No such file or directory Oct 5 05:05:42 localhost systemd[1]: 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: Failed to open /run/systemd/transient/80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6.service: No such file or directory Oct 5 05:05:42 localhost podman[108638]: 2025-10-05 09:05:42.20046097 +0000 UTC m=+0.065150901 container cleanup 80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 05:05:42 localhost podman[108638]: ceilometer_agent_ipmi Oct 5 05:05:42 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Oct 5 05:05:42 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. 
Oct 5 05:05:42 localhost podman[108596]: 2025-10-05 09:05:42.287844692 +0000 UTC m=+0.334616034 container exec_died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, container_name=metrics_qdr, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, release=1, 
com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 05:05:42 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Deactivated successfully. Oct 5 05:05:42 localhost systemd[1]: var-lib-containers-storage-overlay-8379cddc00dbe5dcf7036da062404e87efadd4333dd5918c48ece607468bfa14-merged.mount: Deactivated successfully. Oct 5 05:05:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80f1d8e519040333c609757658555ee9adb6b2ef62ad9fd8b037b8dfedf66eb6-userdata-shm.mount: Deactivated successfully. Oct 5 05:05:43 localhost python3.9[108742]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:05:43 localhost systemd[1]: Reloading. Oct 5 05:05:43 localhost systemd-rc-local-generator[108764]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:05:43 localhost systemd-sysv-generator[108768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:05:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:05:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29293 DF PROTO=TCP SPT=38612 DPT=9102 SEQ=3944477319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCA9DE0000000001030307) Oct 5 05:05:43 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:05:43 localhost systemd[1]: Stopping collectd container... Oct 5 05:05:43 localhost recover_tripleo_nova_virtqemud[108783]: 62622 Oct 5 05:05:43 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:05:43 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 05:05:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29294 DF PROTO=TCP SPT=38612 DPT=9102 SEQ=3944477319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCADDD0000000001030307) Oct 5 05:05:45 localhost systemd[1]: libpod-0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.scope: Deactivated successfully. Oct 5 05:05:45 localhost systemd[1]: libpod-0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.scope: Consumed 2.246s CPU time. 
Oct 5 05:05:45 localhost podman[108784]: 2025-10-05 09:05:45.293155253 +0000 UTC m=+1.759520673 container died 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, 
name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12) Oct 5 05:05:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.timer: Deactivated successfully. Oct 5 05:05:45 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1. Oct 5 05:05:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Failed to open /run/systemd/transient/0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: No such file or directory Oct 5 05:05:45 localhost systemd[1]: tmp-crun.AVMDsu.mount: Deactivated successfully. Oct 5 05:05:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:05:45 localhost podman[108784]: 2025-10-05 09:05:45.353829133 +0000 UTC m=+1.820194503 container cleanup 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, container_name=collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step3, tcib_managed=true, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, release=2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 05:05:45 localhost podman[108784]: collectd Oct 5 05:05:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.timer: Failed to open /run/systemd/transient/0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.timer: No such file or directory Oct 5 05:05:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Failed to open /run/systemd/transient/0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: No such file or directory Oct 5 05:05:45 localhost podman[108796]: 2025-10-05 09:05:45.392701564 +0000 UTC m=+0.092338317 container cleanup 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-type=git, com.redhat.component=openstack-collectd-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, container_name=collectd, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, batch=17.1_20250721.1) Oct 5 05:05:45 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:45 localhost systemd[1]: libpod-conmon-0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.scope: Deactivated successfully. 
Oct 5 05:05:45 localhost podman[108827]: error opening file `/run/crun/0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1/status`: No such file or directory Oct 5 05:05:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.timer: Failed to open /run/systemd/transient/0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.timer: No such file or directory Oct 5 05:05:45 localhost systemd[1]: 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: Failed to open /run/systemd/transient/0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1.service: No such file or directory Oct 5 05:05:45 localhost podman[108816]: 2025-10-05 09:05:45.513325234 +0000 UTC m=+0.082517002 container cleanup 0267e1128f1c3d34d6a19e93f4ababd0b26ae304ac3cb8714cb1a18a634992c1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, container_name=collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container) Oct 5 05:05:45 localhost podman[108816]: collectd Oct 5 05:05:45 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'. Oct 5 05:05:45 localhost systemd[1]: Stopped collectd container. Oct 5 05:05:46 localhost python3.9[108920]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:05:46 localhost systemd[1]: var-lib-containers-storage-overlay-39c1bb8c85867868083ee6bdbdf271740a2f1e31fdcf6b8f1d69303927ef66fa-merged.mount: Deactivated successfully. Oct 5 05:05:46 localhost systemd[1]: Reloading. 
Oct 5 05:05:46 localhost systemd-sysv-generator[108949]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:05:46 localhost systemd-rc-local-generator[108946]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:05:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29295 DF PROTO=TCP SPT=38612 DPT=9102 SEQ=3944477319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCB5DD0000000001030307) Oct 5 05:05:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:05:46 localhost systemd[1]: Stopping iscsid container... Oct 5 05:05:46 localhost systemd[1]: tmp-crun.EM8yxO.mount: Deactivated successfully. Oct 5 05:05:46 localhost systemd[1]: libpod-5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.scope: Deactivated successfully. Oct 5 05:05:46 localhost systemd[1]: libpod-5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.scope: Consumed 1.158s CPU time. 
Oct 5 05:05:46 localhost podman[108961]: 2025-10-05 09:05:46.754175759 +0000 UTC m=+0.086110638 container died 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, release=1, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:05:46 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.timer: Deactivated successfully.
Oct 5 05:05:46 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.
Oct 5 05:05:46 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Failed to open /run/systemd/transient/5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: No such file or directory
Oct 5 05:05:46 localhost podman[108961]: 2025-10-05 09:05:46.792565676 +0000 UTC m=+0.124500505 container cleanup 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, container_name=iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 5 05:05:46 localhost podman[108961]: iscsid
Oct 5 05:05:46 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.timer: Failed to open /run/systemd/transient/5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.timer: No such file or directory
Oct 5 05:05:46 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Failed to open /run/systemd/transient/5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: No such file or directory
Oct 5 05:05:46 localhost podman[108975]: 2025-10-05 09:05:46.830108621 +0000 UTC m=+0.062930982 container cleanup 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, release=1, io.openshift.expose-services=, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, version=17.1.9)
Oct 5 05:05:46 localhost systemd[1]: libpod-conmon-5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.scope: Deactivated successfully.
Oct 5 05:05:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50080 DF PROTO=TCP SPT=41842 DPT=9100 SEQ=1341606083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCB76C0000000001030307)
Oct 5 05:05:46 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.timer: Failed to open /run/systemd/transient/5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.timer: No such file or directory
Oct 5 05:05:46 localhost systemd[1]: 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: Failed to open /run/systemd/transient/5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb.service: No such file or directory
Oct 5 05:05:46 localhost podman[108988]: 2025-10-05 09:05:46.929441165 +0000 UTC m=+0.068839050 container cleanup 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.openshift.expose-services=)
Oct 5 05:05:46 localhost podman[108988]: iscsid
Oct 5 05:05:46 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully.
Oct 5 05:05:46 localhost systemd[1]: Stopped iscsid container.
Oct 5 05:05:47 localhost systemd[1]: var-lib-containers-storage-overlay-7a9d8baf35bc0bfdd1af3c321e72fe98328bf9d350d48953a4ebb7cb925693bb-merged.mount: Deactivated successfully.
Oct 5 05:05:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb-userdata-shm.mount: Deactivated successfully.
Oct 5 05:05:47 localhost python3.9[109092]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:05:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50081 DF PROTO=TCP SPT=41842 DPT=9100 SEQ=1341606083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCBB5E0000000001030307)
Oct 5 05:05:48 localhost systemd[1]: Reloading.
Oct 5 05:05:48 localhost systemd-rc-local-generator[109117]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:05:48 localhost systemd-sysv-generator[109122]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:05:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:05:49 localhost systemd[1]: Stopping logrotate_crond container...
Oct 5 05:05:49 localhost systemd[1]: libpod-c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.scope: Deactivated successfully.
Oct 5 05:05:49 localhost systemd[1]: libpod-c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.scope: Consumed 1.058s CPU time.
Oct 5 05:05:49 localhost podman[109133]: 2025-10-05 09:05:49.174076619 +0000 UTC m=+0.081185455 container died c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, container_name=logrotate_crond, vcs-type=git, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12)
Oct 5 05:05:49 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.timer: Deactivated successfully.
Oct 5 05:05:49 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.
Oct 5 05:05:49 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Failed to open /run/systemd/transient/c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: No such file or directory
Oct 5 05:05:49 localhost podman[109133]: 2025-10-05 09:05:49.285259684 +0000 UTC m=+0.192368520 container cleanup c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public)
Oct 5 05:05:49 localhost podman[109133]: logrotate_crond
Oct 5 05:05:49 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.timer: Failed to open /run/systemd/transient/c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.timer: No such file or directory
Oct 5 05:05:49 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Failed to open /run/systemd/transient/c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: No such file or directory
Oct 5 05:05:49 localhost podman[109147]: 2025-10-05 09:05:49.308512273 +0000 UTC m=+0.127162589 container cleanup c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, architecture=x86_64, version=17.1.9)
Oct 5 05:05:49 localhost systemd[1]: libpod-conmon-c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.scope: Deactivated successfully.
Oct 5 05:05:49 localhost podman[109175]: error opening file `/run/crun/c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a/status`: No such file or directory
Oct 5 05:05:49 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.timer: Failed to open /run/systemd/transient/c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.timer: No such file or directory
Oct 5 05:05:49 localhost systemd[1]: c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: Failed to open /run/systemd/transient/c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a.service: No such file or directory
Oct 5 05:05:49 localhost podman[109163]: 2025-10-05 09:05:49.412119723 +0000 UTC m=+0.075124852 container cleanup c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 cron)
Oct 5 05:05:49 localhost podman[109163]: logrotate_crond
Oct 5 05:05:49 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully.
Oct 5 05:05:49 localhost systemd[1]: Stopped logrotate_crond container.
Oct 5 05:05:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50082 DF PROTO=TCP SPT=41842 DPT=9100 SEQ=1341606083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCC35D0000000001030307)
Oct 5 05:05:50 localhost systemd[1]: tmp-crun.qv1u67.mount: Deactivated successfully.
Oct 5 05:05:50 localhost systemd[1]: var-lib-containers-storage-overlay-af758892ab760708ec72004cbcaa49f310dccfa379de6a731502123c0acc8fbf-merged.mount: Deactivated successfully.
Oct 5 05:05:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4bd37d3f949388f5ea210992f2dad34ddf2584b85dea1f5ea82da0a4c549a7a-userdata-shm.mount: Deactivated successfully.
Oct 5 05:05:50 localhost python3.9[109268]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:05:50 localhost systemd[1]: Reloading.
Oct 5 05:05:50 localhost systemd-rc-local-generator[109292]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:05:50 localhost systemd-sysv-generator[109297]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:05:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:05:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29296 DF PROTO=TCP SPT=38612 DPT=9102 SEQ=3944477319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCC59D0000000001030307)
Oct 5 05:05:50 localhost systemd[1]: Stopping metrics_qdr container...
Oct 5 05:05:50 localhost systemd[1]: tmp-crun.jykpML.mount: Deactivated successfully.
Oct 5 05:05:50 localhost kernel: qdrouterd[55196]: segfault at 0 ip 00007f1fab2287cb sp 00007ffe206a56c0 error 4 in libc.so.6[7f1fab1c5000+175000]
Oct 5 05:05:50 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9
Oct 5 05:05:50 localhost systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 5 05:05:50 localhost systemd[1]: Started Process Core Dump (PID 109320/UID 0).
Oct 5 05:05:50 localhost systemd-coredump[109321]: Resource limits disable core dumping for process 55196 (qdrouterd).
Oct 5 05:05:50 localhost systemd-coredump[109321]: Process 55196 (qdrouterd) of user 42465 dumped core.
Oct 5 05:05:50 localhost systemd[1]: systemd-coredump@0-109320-0.service: Deactivated successfully.
Oct 5 05:05:50 localhost podman[109308]: 2025-10-05 09:05:50.894624159 +0000 UTC m=+0.247969584 container died 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, io.buildah.version=1.33.12, distribution-scope=public, container_name=metrics_qdr, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 5 05:05:50 localhost systemd[1]: libpod-951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.scope: Deactivated successfully.
Oct 5 05:05:50 localhost systemd[1]: libpod-951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.scope: Consumed 30.236s CPU time.
Oct 5 05:05:50 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.timer: Deactivated successfully.
Oct 5 05:05:50 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.
Oct 5 05:05:50 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Failed to open /run/systemd/transient/951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: No such file or directory
Oct 5 05:05:50 localhost podman[109308]: 2025-10-05 09:05:50.948021442 +0000 UTC m=+0.301366857 container cleanup 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, architecture=x86_64, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Oct 5 05:05:50 localhost podman[109308]: metrics_qdr
Oct 5 05:05:50 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.timer: Failed to open /run/systemd/transient/951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.timer: No such file or directory
Oct 5 05:05:50 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Failed to open /run/systemd/transient/951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: No such file or directory
Oct 5 05:05:50 localhost podman[109325]: 2025-10-05 09:05:50.993612294 +0000 UTC m=+0.085869413 container cleanup 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, container_name=metrics_qdr, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true)
Oct 5 05:05:51 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a
Oct 5 05:05:51 localhost systemd[1]: libpod-conmon-951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.scope: Deactivated successfully.
Oct 5 05:05:51 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.timer: Failed to open /run/systemd/transient/951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.timer: No such file or directory Oct 5 05:05:51 localhost systemd[1]: 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: Failed to open /run/systemd/transient/951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569.service: No such file or directory Oct 5 05:05:51 localhost podman[109340]: 2025-10-05 09:05:51.092317381 +0000 UTC m=+0.069796787 container cleanup 951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., container_name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1d5d37a9592a9b8a8e98d61f24e93486'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:05:51 localhost podman[109340]: metrics_qdr Oct 5 05:05:51 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Oct 5 05:05:51 localhost systemd[1]: Stopped metrics_qdr container. Oct 5 05:05:51 localhost systemd[1]: var-lib-containers-storage-overlay-4e34842e7b4c004f2994d43372776e245b87a7ff67070b72cbc86e95e6c79b83-merged.mount: Deactivated successfully. Oct 5 05:05:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-951500b961e6ce330ea2770bc5e130a0d07f20d1967dd150458629d4e6247569-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:05:51 localhost python3.9[109445]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:05:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48834 DF PROTO=TCP SPT=35020 DPT=9105 SEQ=2103630941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCCB9F0000000001030307) Oct 5 05:05:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:05:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:05:52 localhost python3.9[109538]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:05:52 localhost systemd[1]: tmp-crun.Iy8yu4.mount: Deactivated successfully. 
Oct 5 05:05:52 localhost podman[109540]: 2025-10-05 09:05:52.70813485 +0000 UTC m=+0.102265454 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent) Oct 5 05:05:52 localhost podman[109539]: 2025-10-05 09:05:52.715930821 +0000 UTC m=+0.114503036 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, 
batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44) Oct 5 05:05:52 localhost podman[109540]: 2025-10-05 09:05:52.726903738 +0000 UTC m=+0.121034382 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Oct 5 05:05:52 localhost podman[109540]: unhealthy Oct 5 05:05:52 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:52 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:05:52 localhost podman[109539]: 2025-10-05 09:05:52.755787408 +0000 UTC m=+0.154359633 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, container_name=ovn_controller, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 05:05:52 localhost podman[109539]: unhealthy Oct 5 05:05:52 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:52 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:05:53 localhost podman[109670]: 2025-10-05 09:05:53.176770195 +0000 UTC m=+0.082580652 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 05:05:53 localhost python3.9[109671]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:05:53 localhost podman[109670]: 2025-10-05 09:05:53.557750731 +0000 UTC m=+0.463561178 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, version=17.1.9, container_name=nova_migration_target, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:05:53 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. 
Oct 5 05:05:54 localhost python3.9[109787]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:05:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41202 DF PROTO=TCP SPT=37472 DPT=9882 SEQ=2579757588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCD55D0000000001030307) Oct 5 05:05:55 localhost systemd[1]: Reloading. Oct 5 05:05:55 localhost systemd-sysv-generator[109818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:05:55 localhost systemd-rc-local-generator[109815]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:05:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:05:55 localhost systemd[1]: Stopping nova_compute container... Oct 5 05:05:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41203 DF PROTO=TCP SPT=37472 DPT=9882 SEQ=2579757588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECCE51D0000000001030307) Oct 5 05:06:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. 
Oct 5 05:06:00 localhost podman[109840]: Error: container 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 is not running Oct 5 05:06:00 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=125/n/a Oct 5 05:06:00 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 05:06:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24495 DF PROTO=TCP SPT=45750 DPT=9102 SEQ=3922246238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD1F0E0000000001030307) Oct 5 05:06:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24496 DF PROTO=TCP SPT=45750 DPT=9102 SEQ=3922246238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD231D0000000001030307) Oct 5 05:06:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24497 DF PROTO=TCP SPT=45750 DPT=9102 SEQ=3922246238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD2B1D0000000001030307) Oct 5 05:06:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28196 DF PROTO=TCP SPT=41384 DPT=9100 SEQ=1133637772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD2C9D0000000001030307) Oct 5 05:06:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28197 DF PROTO=TCP 
SPT=41384 DPT=9100 SEQ=1133637772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD309D0000000001030307) Oct 5 05:06:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28198 DF PROTO=TCP SPT=41384 DPT=9100 SEQ=1133637772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD389D0000000001030307) Oct 5 05:06:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24498 DF PROTO=TCP SPT=45750 DPT=9102 SEQ=3922246238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD3ADE0000000001030307) Oct 5 05:06:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60026 DF PROTO=TCP SPT=60360 DPT=9105 SEQ=2778027666 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD40CF0000000001030307) Oct 5 05:06:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:06:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:06:23 localhost systemd[1]: tmp-crun.syo6WK.mount: Deactivated successfully. 
Oct 5 05:06:23 localhost podman[109854]: 2025-10-05 09:06:23.158334729 +0000 UTC m=+0.070114736 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., vcs-type=git) Oct 5 05:06:23 localhost podman[109854]: 2025-10-05 09:06:23.206343357 +0000 UTC m=+0.118123364 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.openshift.expose-services=) Oct 5 05:06:23 localhost podman[109854]: unhealthy Oct 5 05:06:23 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:06:23 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:06:23 localhost podman[109853]: 2025-10-05 09:06:23.209267516 +0000 UTC m=+0.122907163 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44, container_name=ovn_controller, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible) Oct 5 05:06:23 localhost podman[109853]: 2025-10-05 09:06:23.292815594 +0000 
UTC m=+0.206455171 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 05:06:23 localhost podman[109853]: unhealthy Oct 5 05:06:23 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, 
code=exited, status=1/FAILURE Oct 5 05:06:23 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:06:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18337 DF PROTO=TCP SPT=50874 DPT=9882 SEQ=1340886060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD4A9D0000000001030307) Oct 5 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:06:24 localhost podman[109893]: 2025-10-05 09:06:24.671566016 +0000 UTC m=+0.083734534 container health_status 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37) Oct 5 05:06:25 localhost podman[109893]: 2025-10-05 09:06:25.067938118 +0000 UTC m=+0.480106626 container exec_died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git) Oct 5 05:06:25 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Deactivated successfully. 
Oct 5 05:06:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18338 DF PROTO=TCP SPT=50874 DPT=9882 SEQ=1340886060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD5A5D0000000001030307) Oct 5 05:06:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:06:30 localhost podman[109917]: Error: container 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 is not running Oct 5 05:06:30 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Main process exited, code=exited, status=125/n/a Oct 5 05:06:30 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed with result 'exit-code'. Oct 5 05:06:37 localhost podman[109828]: time="2025-10-05T09:06:37Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Oct 5 05:06:37 localhost systemd[1]: session-c11.scope: Deactivated successfully. Oct 5 05:06:37 localhost systemd[1]: session-c11.scope: Consumed 1.031s CPU time. Oct 5 05:06:37 localhost systemd[1]: libpod-5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.scope: Deactivated successfully. Oct 5 05:06:37 localhost systemd[1]: libpod-5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.scope: Consumed 38.731s CPU time. 
Oct 5 05:06:37 localhost podman[109828]: 2025-10-05 09:06:37.707539411 +0000 UTC m=+42.102860552 container stop 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute) Oct 5 05:06:37 localhost podman[109828]: 2025-10-05 09:06:37.744621454 +0000 UTC m=+42.139942545 container died 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible) Oct 5 05:06:37 localhost systemd[1]: 
5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.timer: Deactivated successfully. Oct 5 05:06:37 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424. Oct 5 05:06:37 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed to open /run/systemd/transient/5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: No such file or directory Oct 5 05:06:37 localhost systemd[1]: var-lib-containers-storage-overlay-796699655caeb5f0994b29dba9c776a53d79d4d4bd0c45c0951fd1d3486e626e-merged.mount: Deactivated successfully. Oct 5 05:06:37 localhost podman[109828]: 2025-10-05 09:06:37.786483435 +0000 UTC m=+42.181804536 container cleanup 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:06:37 localhost podman[109828]: nova_compute Oct 5 05:06:37 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.timer: Failed to open /run/systemd/transient/5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.timer: No such file or directory Oct 5 05:06:37 localhost systemd[1]: 
5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed to open /run/systemd/transient/5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: No such file or directory Oct 5 05:06:37 localhost podman[109993]: 2025-10-05 09:06:37.794692226 +0000 UTC m=+0.067094223 container cleanup 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T14:48:37, vcs-type=git, maintainer=OpenStack TripleO Team, release=1) Oct 5 05:06:37 localhost systemd[1]: libpod-conmon-5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.scope: Deactivated successfully. 
Oct 5 05:06:37 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.timer: Failed to open /run/systemd/transient/5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.timer: No such file or directory Oct 5 05:06:37 localhost systemd[1]: 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: Failed to open /run/systemd/transient/5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424.service: No such file or directory Oct 5 05:06:37 localhost podman[110006]: 2025-10-05 09:06:37.880760433 +0000 UTC m=+0.065691916 container cleanup 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, distribution-scope=public, io.openshift.expose-services=) Oct 5 05:06:37 localhost podman[110006]: nova_compute Oct 5 05:06:37 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. Oct 5 05:06:37 localhost systemd[1]: Stopped nova_compute container. Oct 5 05:06:37 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.161s CPU time, no IO. 
Oct 5 05:06:38 localhost python3.9[110109]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:06:38 localhost systemd[1]: Reloading. Oct 5 05:06:38 localhost systemd-rc-local-generator[110133]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:06:38 localhost systemd-sysv-generator[110139]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:06:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:06:39 localhost systemd[1]: Stopping nova_migration_target container... Oct 5 05:06:39 localhost systemd[1]: libpod-789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.scope: Deactivated successfully. Oct 5 05:06:39 localhost systemd[1]: libpod-789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.scope: Consumed 34.927s CPU time. 
Oct 5 05:06:39 localhost podman[110149]: 2025-10-05 09:06:39.232116105 +0000 UTC m=+0.077615739 container died 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=nova_migration_target, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:06:39 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.timer: Deactivated successfully. Oct 5 05:06:39 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae. Oct 5 05:06:39 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Failed to open /run/systemd/transient/789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: No such file or directory Oct 5 05:06:39 localhost systemd[1]: tmp-crun.o61Kma.mount: Deactivated successfully. Oct 5 05:06:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:06:39 localhost podman[110149]: 2025-10-05 09:06:39.290478662 +0000 UTC m=+0.135978296 container cleanup 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4) Oct 5 05:06:39 localhost podman[110149]: nova_migration_target Oct 5 05:06:39 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.timer: Failed to open /run/systemd/transient/789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.timer: No such file or directory Oct 5 05:06:39 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Failed to open /run/systemd/transient/789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: No such file or directory Oct 5 05:06:39 localhost podman[110164]: 2025-10-05 09:06:39.319428515 +0000 UTC m=+0.074732151 container cleanup 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, batch=17.1_20250721.1) Oct 5 05:06:39 localhost systemd[1]: libpod-conmon-789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.scope: Deactivated successfully. 
Oct 5 05:06:39 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.timer: Failed to open /run/systemd/transient/789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.timer: No such file or directory Oct 5 05:06:39 localhost systemd[1]: 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: Failed to open /run/systemd/transient/789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae.service: No such file or directory Oct 5 05:06:39 localhost podman[110176]: 2025-10-05 09:06:39.414905395 +0000 UTC m=+0.061641947 container cleanup 789f0e021bfca79eef5a921d6ee051aa19eb9feaed23c1cc39cc57b13dc8a4ae (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9) Oct 5 05:06:39 localhost podman[110176]: nova_migration_target Oct 5 05:06:39 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Oct 5 05:06:39 localhost systemd[1]: Stopped nova_migration_target container. Oct 5 05:06:40 localhost python3.9[110296]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:06:40 localhost systemd[1]: var-lib-containers-storage-overlay-0de907bff4ad3d3e1c7b6c9c4a1859902dc0671a32f334119ad23f86dab0e0a0-merged.mount: Deactivated successfully. Oct 5 05:06:40 localhost systemd[1]: Reloading. Oct 5 05:06:40 localhost systemd-sysv-generator[110329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:06:40 localhost systemd-rc-local-generator[110325]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:06:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:06:40 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Oct 5 05:06:40 localhost systemd[1]: libpod-918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d.scope: Deactivated successfully. Oct 5 05:06:40 localhost podman[110336]: 2025-10-05 09:06:40.681166387 +0000 UTC m=+0.062212832 container died 918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, managed_by=tripleo_ansible, distribution-scope=public, container_name=nova_virtlogd_wrapper, build-date=2025-07-21T14:56:59, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:06:40 localhost systemd[1]: tmp-crun.02m22v.mount: Deactivated successfully. 
Oct 5 05:06:40 localhost podman[110336]: 2025-10-05 09:06:40.731517308 +0000 UTC m=+0.112563743 container cleanup 918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, container_name=nova_virtlogd_wrapper, release=2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 5 05:06:40 localhost podman[110336]: nova_virtlogd_wrapper Oct 5 05:06:40 localhost podman[110349]: 2025-10-05 09:06:40.780303726 +0000 UTC m=+0.086638503 container cleanup 918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2025-07-21T14:56:59, container_name=nova_virtlogd_wrapper, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, 
config_id=tripleo_step3, io.openshift.expose-services=, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2) Oct 5 05:06:41 localhost systemd[1]: var-lib-containers-storage-overlay-b1d68cb7392e246942b2a3b582c63dcc04c4b9d2fce93b251d2f59828e400c38-merged.mount: Deactivated successfully. Oct 5 05:06:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d-userdata-shm.mount: Deactivated successfully. Oct 5 05:06:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19493 DF PROTO=TCP SPT=52040 DPT=9102 SEQ=2661402502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD943E0000000001030307) Oct 5 05:06:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19494 DF PROTO=TCP SPT=52040 DPT=9102 SEQ=2661402502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECD985D0000000001030307) Oct 5 05:06:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19495 DF PROTO=TCP SPT=52040 DPT=9102 SEQ=2661402502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECDA05E0000000001030307) Oct 5 05:06:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15001 DF PROTO=TCP SPT=35762 DPT=9100 SEQ=298949301 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECDA1CD0000000001030307) Oct 5 05:06:47 localhost systemd[1]: Stopping User Manager for UID 0... 
Oct 5 05:06:47 localhost systemd[84940]: Activating special unit Exit the Session... Oct 5 05:06:47 localhost systemd[84940]: Removed slice User Background Tasks Slice. Oct 5 05:06:47 localhost systemd[84940]: Stopped target Main User Target. Oct 5 05:06:47 localhost systemd[84940]: Stopped target Basic System. Oct 5 05:06:47 localhost systemd[84940]: Stopped target Paths. Oct 5 05:06:47 localhost systemd[84940]: Stopped target Sockets. Oct 5 05:06:47 localhost systemd[84940]: Stopped target Timers. Oct 5 05:06:47 localhost systemd[84940]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 05:06:47 localhost systemd[84940]: Closed D-Bus User Message Bus Socket. Oct 5 05:06:47 localhost systemd[84940]: Stopped Create User's Volatile Files and Directories. Oct 5 05:06:47 localhost systemd[84940]: Removed slice User Application Slice. Oct 5 05:06:47 localhost systemd[84940]: Reached target Shutdown. Oct 5 05:06:47 localhost systemd[84940]: Finished Exit the Session. Oct 5 05:06:47 localhost systemd[84940]: Reached target Exit the Session. Oct 5 05:06:47 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 05:06:47 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 05:06:47 localhost systemd[1]: user@0.service: Consumed 2.927s CPU time, no IO. Oct 5 05:06:47 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 05:06:47 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 5 05:06:47 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 05:06:47 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 05:06:47 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 05:06:47 localhost systemd[1]: user-0.slice: Consumed 3.986s CPU time. 
Oct 5 05:06:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15002 DF PROTO=TCP SPT=35762 DPT=9100 SEQ=298949301 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECDA5DD0000000001030307) Oct 5 05:06:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15003 DF PROTO=TCP SPT=35762 DPT=9100 SEQ=298949301 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECDADDD0000000001030307) Oct 5 05:06:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19496 DF PROTO=TCP SPT=52040 DPT=9102 SEQ=2661402502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECDB01D0000000001030307) Oct 5 05:06:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41549 DF PROTO=TCP SPT=32908 DPT=9105 SEQ=3837004142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECDB5FF0000000001030307) Oct 5 05:06:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:06:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 05:06:53 localhost podman[110369]: 2025-10-05 09:06:53.698866854 +0000 UTC m=+0.103182290 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 05:06:53 localhost podman[110369]: 2025-10-05 09:06:53.716711836 +0000 UTC m=+0.121027242 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53) Oct 5 05:06:53 localhost podman[110369]: unhealthy Oct 5 05:06:53 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:06:53 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:06:53 localhost podman[110368]: 2025-10-05 09:06:53.672185622 +0000 UTC m=+0.083946410 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 05:06:53 localhost podman[110368]: 2025-10-05 09:06:53.806867102 +0000 
UTC m=+0.218627850 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 05:06:53 localhost podman[110368]: unhealthy Oct 5 05:06:53 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, 
code=exited, status=1/FAILURE Oct 5 05:06:53 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:06:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9638 DF PROTO=TCP SPT=51464 DPT=9882 SEQ=481200030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECDBFDD0000000001030307) Oct 5 05:06:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9639 DF PROTO=TCP SPT=51464 DPT=9882 SEQ=481200030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECDCF9D0000000001030307) Oct 5 05:07:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28303 DF PROTO=TCP SPT=51780 DPT=9102 SEQ=1601797991 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE096E0000000001030307) Oct 5 05:07:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28304 DF PROTO=TCP SPT=51780 DPT=9102 SEQ=1601797991 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE0D5D0000000001030307) Oct 5 05:07:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28305 DF PROTO=TCP SPT=51780 DPT=9102 SEQ=1601797991 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE155E0000000001030307) Oct 5 05:07:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1584 DF PROTO=TCP SPT=46198 DPT=9100 SEQ=4032731051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE16FD0000000001030307) Oct 5 05:07:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1585 DF PROTO=TCP SPT=46198 DPT=9100 SEQ=4032731051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE1B1E0000000001030307) Oct 5 05:07:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1586 DF PROTO=TCP SPT=46198 DPT=9100 SEQ=4032731051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE231D0000000001030307) Oct 5 05:07:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28306 DF PROTO=TCP SPT=51780 DPT=9102 SEQ=1601797991 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE251D0000000001030307) Oct 5 05:07:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25985 DF PROTO=TCP SPT=54564 DPT=9105 SEQ=1535077898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE2B2F0000000001030307) Oct 5 05:07:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:07:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 05:07:23 localhost podman[110410]: 2025-10-05 09:07:23.929010879 +0000 UTC m=+0.081090393 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, version=17.1.9, batch=17.1_20250721.1) Oct 5 05:07:23 localhost podman[110410]: 2025-10-05 09:07:23.945784352 +0000 
UTC m=+0.097863896 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, release=1, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vcs-type=git) Oct 5 05:07:23 localhost podman[110410]: unhealthy Oct 5 05:07:23 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, 
code=exited, status=1/FAILURE Oct 5 05:07:23 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:07:24 localhost systemd[1]: tmp-crun.lZKOJf.mount: Deactivated successfully. Oct 5 05:07:24 localhost podman[110411]: 2025-10-05 09:07:24.047009478 +0000 UTC m=+0.194738405 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Oct 5 05:07:24 localhost podman[110411]: 2025-10-05 09:07:24.062091165 +0000 UTC m=+0.209820102 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, release=1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:07:24 localhost podman[110411]: unhealthy Oct 5 05:07:24 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 
05:07:24 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 05:07:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31775 DF PROTO=TCP SPT=40956 DPT=9882 SEQ=1126960623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE34DD0000000001030307) Oct 5 05:07:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31776 DF PROTO=TCP SPT=40956 DPT=9882 SEQ=1126960623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE449D0000000001030307) Oct 5 05:07:35 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:07:35 localhost recover_tripleo_nova_virtqemud[110452]: 62622 Oct 5 05:07:35 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:07:35 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 05:07:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43673 DF PROTO=TCP SPT=56098 DPT=9102 SEQ=4294182477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE7E9D0000000001030307) Oct 5 05:07:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43674 DF PROTO=TCP SPT=56098 DPT=9102 SEQ=4294182477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE829D0000000001030307) Oct 5 05:07:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43675 DF PROTO=TCP SPT=56098 DPT=9102 SEQ=4294182477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE8A9D0000000001030307) Oct 5 05:07:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13292 DF PROTO=TCP SPT=52318 DPT=9100 SEQ=1326920937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE8C2D0000000001030307) Oct 5 05:07:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13293 DF PROTO=TCP SPT=52318 DPT=9100 SEQ=1326920937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE901E0000000001030307) Oct 5 05:07:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13294 DF PROTO=TCP SPT=52318 DPT=9100 SEQ=1326920937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080ABECE981E0000000001030307) Oct 5 05:07:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43676 DF PROTO=TCP SPT=56098 DPT=9102 SEQ=4294182477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECE9A5D0000000001030307) Oct 5 05:07:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40169 DF PROTO=TCP SPT=49918 DPT=9105 SEQ=1802781741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECEA05E0000000001030307) Oct 5 05:07:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:07:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. Oct 5 05:07:54 localhost podman[110581]: 2025-10-05 09:07:54.190218816 +0000 UTC m=+0.090382833 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_controller, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 05:07:54 localhost podman[110581]: 2025-10-05 09:07:54.205470979 +0000 UTC m=+0.105635006 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, build-date=2025-07-21T13:28:44, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, container_name=ovn_controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:07:54 localhost podman[110581]: unhealthy Oct 5 05:07:54 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:07:54 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 05:07:54 localhost podman[110582]: 2025-10-05 09:07:54.293476657 +0000 UTC m=+0.189941164 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9) Oct 5 05:07:54 localhost podman[110582]: 2025-10-05 09:07:54.312114671 +0000 UTC m=+0.208579218 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, release=1, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4) Oct 5 05:07:54 localhost podman[110582]: unhealthy Oct 5 05:07:54 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:07:54 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:07:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51809 DF PROTO=TCP SPT=44444 DPT=9882 SEQ=2784695451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECEAA1D0000000001030307) Oct 5 05:07:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51810 DF PROTO=TCP SPT=44444 DPT=9882 SEQ=2784695451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECEB9DD0000000001030307) Oct 5 05:08:04 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. Oct 5 05:08:04 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61870 (conmon) with signal SIGKILL. Oct 5 05:08:04 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Oct 5 05:08:04 localhost systemd[1]: libpod-conmon-918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d.scope: Deactivated successfully. 
Oct 5 05:08:04 localhost podman[110632]: error opening file `/run/crun/918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d/status`: No such file or directory Oct 5 05:08:04 localhost podman[110621]: 2025-10-05 09:08:04.912080938 +0000 UTC m=+0.073423325 container cleanup 918684aca3031e7f1e394dffb320e311b28dadbbb5f5a8ad8ff193b8ab2c440d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-type=git, container_name=nova_virtlogd_wrapper, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:08:04 localhost podman[110621]: nova_virtlogd_wrapper Oct 5 05:08:04 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. Oct 5 05:08:04 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. Oct 5 05:08:05 localhost python3.9[110725]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:05 localhost systemd[1]: Reloading. Oct 5 05:08:05 localhost systemd-rc-local-generator[110755]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:08:05 localhost systemd-sysv-generator[110758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:08:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:06 localhost systemd[1]: Stopping nova_virtnodedevd container... Oct 5 05:08:06 localhost systemd[1]: libpod-21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d.scope: Deactivated successfully. Oct 5 05:08:06 localhost systemd[1]: libpod-21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d.scope: Consumed 1.687s CPU time. Oct 5 05:08:06 localhost podman[110766]: 2025-10-05 09:08:06.176023458 +0000 UTC m=+0.085277516 container died 21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, container_name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64) Oct 5 05:08:06 localhost systemd[1]: tmp-crun.XN4orr.mount: Deactivated successfully. 
Oct 5 05:08:06 localhost podman[110766]: 2025-10-05 09:08:06.225515605 +0000 UTC m=+0.134769613 container cleanup 21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, container_name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9) Oct 5 05:08:06 localhost podman[110766]: nova_virtnodedevd Oct 5 05:08:06 localhost podman[110781]: 2025-10-05 09:08:06.301755096 +0000 UTC m=+0.111415822 container cleanup 21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 
'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, container_name=nova_virtnodedevd, build-date=2025-07-21T14:56:59, tcib_managed=true, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.33.12) Oct 5 05:08:06 localhost systemd[1]: libpod-conmon-21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d.scope: Deactivated successfully. Oct 5 05:08:06 localhost podman[110809]: error opening file `/run/crun/21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d/status`: No such file or directory Oct 5 05:08:06 localhost podman[110797]: 2025-10-05 09:08:06.401649045 +0000 UTC m=+0.069692554 container cleanup 21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtnodedevd, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2) Oct 5 05:08:06 localhost podman[110797]: nova_virtnodedevd Oct 5 05:08:06 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully. Oct 5 05:08:06 localhost systemd[1]: Stopped nova_virtnodedevd container. Oct 5 05:08:07 localhost systemd[1]: var-lib-containers-storage-overlay-b3ead5c1e8c394ab02ae8a983cb1adadda2588f7784aa316dd26a72cee1045f5-merged.mount: Deactivated successfully. 
Oct 5 05:08:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21fc8a8c30b295509c7f667c7f846a7adbbf1cd931f55abf079f96157c9cb55d-userdata-shm.mount: Deactivated successfully. Oct 5 05:08:07 localhost python3.9[110903]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:07 localhost systemd[1]: Reloading. Oct 5 05:08:07 localhost systemd-rc-local-generator[110926]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:08:07 localhost systemd-sysv-generator[110933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:08:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:07 localhost systemd[1]: Stopping nova_virtproxyd container... Oct 5 05:08:07 localhost systemd[1]: libpod-8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6.scope: Deactivated successfully. 
Oct 5 05:08:07 localhost podman[110943]: 2025-10-05 09:08:07.675083672 +0000 UTC m=+0.077734952 container died 8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T14:56:59, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.component=openstack-nova-libvirt-container) Oct 5 05:08:07 localhost podman[110943]: 2025-10-05 09:08:07.717352454 +0000 UTC m=+0.120003684 container cleanup 8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step3, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_virtproxyd) Oct 5 05:08:07 localhost podman[110943]: nova_virtproxyd Oct 5 05:08:07 localhost podman[110957]: 2025-10-05 09:08:07.758430774 +0000 UTC m=+0.070125107 container cleanup 8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=nova_virtproxyd, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, release=2) Oct 5 05:08:07 localhost systemd[1]: libpod-conmon-8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6.scope: Deactivated successfully. 
Oct 5 05:08:07 localhost podman[110985]: error opening file `/run/crun/8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6/status`: No such file or directory Oct 5 05:08:07 localhost podman[110972]: 2025-10-05 09:08:07.865052756 +0000 UTC m=+0.069139560 container cleanup 8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, release=2) Oct 5 05:08:07 localhost podman[110972]: nova_virtproxyd Oct 5 05:08:07 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Oct 5 05:08:07 localhost systemd[1]: Stopped nova_virtproxyd container. Oct 5 05:08:08 localhost systemd[1]: var-lib-containers-storage-overlay-badeb4130f9d8cc5090bd2dca4bd725b665e85b1961f91b70a567215f9d62ee4-merged.mount: Deactivated successfully. Oct 5 05:08:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8379c788633b85907cec56396aab6e4e8bb7f79b1f9e2e5169c0d37c1b3364f6-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:08:08 localhost python3.9[111078]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:08 localhost systemd[1]: Reloading. Oct 5 05:08:08 localhost systemd-sysv-generator[111107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:08:08 localhost systemd-rc-local-generator[111102]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:08:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:09 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Oct 5 05:08:09 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Oct 5 05:08:09 localhost systemd[1]: Stopping nova_virtqemud container... Oct 5 05:08:09 localhost systemd[1]: tmp-crun.MOUBH7.mount: Deactivated successfully. Oct 5 05:08:09 localhost systemd[1]: libpod-0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56.scope: Deactivated successfully. Oct 5 05:08:09 localhost systemd[1]: libpod-0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56.scope: Consumed 3.288s CPU time. 
Oct 5 05:08:09 localhost podman[111118]: 2025-10-05 09:08:09.1269794 +0000 UTC m=+0.080253700 container died 0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, container_name=nova_virtqemud, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-07-21T14:56:59, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 05:08:09 localhost podman[111118]: 2025-10-05 09:08:09.147551336 +0000 UTC m=+0.100825616 container cleanup 0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, distribution-scope=public, release=2, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, 
build-date=2025-07-21T14:56:59, managed_by=tripleo_ansible, container_name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.expose-services=) Oct 5 05:08:09 localhost podman[111118]: nova_virtqemud Oct 5 05:08:09 localhost systemd[1]: tmp-crun.VRH5ty.mount: Deactivated successfully. Oct 5 05:08:09 localhost systemd[1]: var-lib-containers-storage-overlay-f26ca429e8dedc8286a762f367a34d49217d302698ad26e3b362ce469133d4f4-merged.mount: Deactivated successfully. Oct 5 05:08:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56-userdata-shm.mount: Deactivated successfully. Oct 5 05:08:09 localhost podman[111133]: 2025-10-05 09:08:09.18358676 +0000 UTC m=+0.044664848 container cleanup 0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_virtqemud, io.buildah.version=1.33.12, config_id=tripleo_step3, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible) Oct 5 05:08:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47046 DF PROTO=TCP SPT=47924 DPT=9102 SEQ=4090957980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECEF3D00000000001030307) Oct 5 05:08:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47047 DF PROTO=TCP SPT=47924 DPT=9102 SEQ=4090957980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECEF7DD0000000001030307) Oct 5 05:08:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47048 DF PROTO=TCP SPT=47924 DPT=9102 SEQ=4090957980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECEFFDD0000000001030307) Oct 5 05:08:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51777 DF PROTO=TCP SPT=60516 DPT=9100 SEQ=914607545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF015D0000000001030307) Oct 5 05:08:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51778 DF PROTO=TCP SPT=60516 DPT=9100 SEQ=914607545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF055D0000000001030307) Oct 5 05:08:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51779 DF PROTO=TCP SPT=60516 DPT=9100 SEQ=914607545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF0D5E0000000001030307) Oct 5 05:08:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47049 DF PROTO=TCP SPT=47924 DPT=9102 SEQ=4090957980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF0F9D0000000001030307) Oct 5 05:08:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49621 DF PROTO=TCP SPT=35422 DPT=9105 SEQ=908841352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF15900000000001030307) Oct 5 05:08:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55572 DF PROTO=TCP SPT=41528 DPT=9882 SEQ=636458110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF1F5D0000000001030307) Oct 5 05:08:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:08:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 05:08:24 localhost podman[111147]: 2025-10-05 09:08:24.668922456 +0000 UTC m=+0.078952745 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Oct 5 05:08:24 localhost podman[111147]: 2025-10-05 09:08:24.683097719 +0000 
UTC m=+0.093128008 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=ovn_controller, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible) Oct 5 05:08:24 localhost podman[111147]: unhealthy Oct 5 05:08:24 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, 
code=exited, status=1/FAILURE Oct 5 05:08:24 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:08:24 localhost podman[111148]: 2025-10-05 09:08:24.734633771 +0000 UTC m=+0.138545634 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:08:24 localhost podman[111148]: 2025-10-05 09:08:24.750925722 +0000 UTC m=+0.154837565 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1) Oct 5 05:08:24 localhost podman[111148]: unhealthy Oct 5 05:08:24 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:08:24 localhost systemd[1]: 
cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 05:08:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55573 DF PROTO=TCP SPT=41528 DPT=9882 SEQ=636458110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF2F1D0000000001030307) Oct 5 05:08:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48472 DF PROTO=TCP SPT=34534 DPT=9102 SEQ=3992936460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF68FD0000000001030307) Oct 5 05:08:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48473 DF PROTO=TCP SPT=34534 DPT=9102 SEQ=3992936460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF6D1D0000000001030307) Oct 5 05:08:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48474 DF PROTO=TCP SPT=34534 DPT=9102 SEQ=3992936460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF751E0000000001030307) Oct 5 05:08:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64526 DF PROTO=TCP SPT=42726 DPT=9100 SEQ=2564661022 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF768D0000000001030307) Oct 5 05:08:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64527 DF PROTO=TCP 
SPT=42726 DPT=9100 SEQ=2564661022 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF7A9D0000000001030307) Oct 5 05:08:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64528 DF PROTO=TCP SPT=42726 DPT=9100 SEQ=2564661022 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF829D0000000001030307) Oct 5 05:08:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48475 DF PROTO=TCP SPT=34534 DPT=9102 SEQ=3992936460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF84DE0000000001030307) Oct 5 05:08:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14758 DF PROTO=TCP SPT=56862 DPT=9105 SEQ=1539294787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF8ABF0000000001030307) Oct 5 05:08:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63201 DF PROTO=TCP SPT=38258 DPT=9882 SEQ=2042635719 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECF949D0000000001030307) Oct 5 05:08:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:08:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 05:08:54 localhost podman[111263]: 2025-10-05 09:08:54.936437533 +0000 UTC m=+0.088866402 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.9, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44) Oct 5 05:08:54 localhost podman[111264]: 2025-10-05 09:08:54.98587432 +0000 
UTC m=+0.138430192 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 05:08:55 localhost podman[111264]: 2025-10-05 09:08:55.002122969 +0000 UTC m=+0.154678831 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:08:55 localhost podman[111264]: unhealthy Oct 5 05:08:55 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:08:55 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. 
Oct 5 05:08:55 localhost podman[111263]: 2025-10-05 09:08:55.053525608 +0000 UTC m=+0.205954497 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=ovn_controller, vcs-type=git, name=rhosp17/openstack-ovn-controller) Oct 5 05:08:55 localhost podman[111263]: unhealthy Oct 5 05:08:55 localhost systemd[1]: 
14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:08:55 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. Oct 5 05:08:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63202 DF PROTO=TCP SPT=38258 DPT=9882 SEQ=2042635719 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECFA45D0000000001030307) Oct 5 05:09:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18867 DF PROTO=TCP SPT=40288 DPT=9102 SEQ=2419785741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECFDE2D0000000001030307) Oct 5 05:09:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18868 DF PROTO=TCP SPT=40288 DPT=9102 SEQ=2419785741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECFE21D0000000001030307) Oct 5 05:09:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18869 DF PROTO=TCP SPT=40288 DPT=9102 SEQ=2419785741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECFEA1D0000000001030307) Oct 5 05:09:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18610 DF PROTO=TCP SPT=58330 DPT=9100 SEQ=1407723764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECFEBBD0000000001030307) Oct 5 05:09:17 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18611 DF PROTO=TCP SPT=58330 DPT=9100 SEQ=1407723764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECFEFDD0000000001030307) Oct 5 05:09:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18612 DF PROTO=TCP SPT=58330 DPT=9100 SEQ=1407723764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECFF7DD0000000001030307) Oct 5 05:09:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18870 DF PROTO=TCP SPT=40288 DPT=9102 SEQ=2419785741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECFF9DD0000000001030307) Oct 5 05:09:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24998 DF PROTO=TCP SPT=51260 DPT=9105 SEQ=1211940757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABECFFFEF0000000001030307) Oct 5 05:09:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34793 DF PROTO=TCP SPT=45454 DPT=9882 SEQ=2531292387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0099D0000000001030307) Oct 5 05:09:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:09:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 05:09:25 localhost podman[111305]: 2025-10-05 09:09:25.174913828 +0000 UTC m=+0.085297546 container health_status 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) 
Oct 5 05:09:25 localhost podman[111305]: 2025-10-05 09:09:25.218827905 +0000 UTC m=+0.129211633 container exec_died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller) Oct 5 05:09:25 localhost podman[111305]: unhealthy Oct 5 05:09:25 localhost podman[111306]: 2025-10-05 
09:09:25.227710315 +0000 UTC m=+0.135038281 container health_status cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:09:25 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:09:25 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed with result 'exit-code'. 
Oct 5 05:09:25 localhost podman[111306]: 2025-10-05 09:09:25.241716573 +0000 UTC m=+0.149044499 container exec_died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 05:09:25 localhost podman[111306]: unhealthy Oct 5 05:09:25 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:09:25 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed with result 'exit-code'. Oct 5 05:09:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34794 DF PROTO=TCP SPT=45454 DPT=9882 SEQ=2531292387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0195D0000000001030307) Oct 5 05:09:33 localhost systemd[1]: tripleo_nova_virtqemud.service: State 'stop-sigterm' timed out. Killing. Oct 5 05:09:33 localhost systemd[1]: tripleo_nova_virtqemud.service: Killing process 62618 (conmon) with signal SIGKILL. Oct 5 05:09:33 localhost systemd[1]: tripleo_nova_virtqemud.service: Main process exited, code=killed, status=9/KILL Oct 5 05:09:33 localhost systemd[1]: libpod-conmon-0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56.scope: Deactivated successfully. 
Oct 5 05:09:33 localhost podman[111356]: error opening file `/run/crun/0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56/status`: No such file or directory Oct 5 05:09:33 localhost podman[111345]: 2025-10-05 09:09:33.423842243 +0000 UTC m=+0.078673427 container cleanup 0e57cf084c5383f4403669d34d07100c7b54b277e8d9698845be0ca5a6e33b56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, build-date=2025-07-21T14:56:59, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, container_name=nova_virtqemud, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public) Oct 5 05:09:33 localhost podman[111345]: nova_virtqemud Oct 5 05:09:33 localhost systemd[1]: tripleo_nova_virtqemud.service: Failed with result 'timeout'. Oct 5 05:09:33 localhost systemd[1]: Stopped nova_virtqemud container. Oct 5 05:09:34 localhost python3.9[111449]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:09:34 localhost systemd[1]: Reloading. Oct 5 05:09:34 localhost systemd-sysv-generator[111479]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:09:34 localhost systemd-rc-local-generator[111473]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:09:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:09:35 localhost python3.9[111579]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:09:35 localhost systemd[1]: Reloading. Oct 5 05:09:35 localhost systemd-sysv-generator[111612]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:09:35 localhost systemd-rc-local-generator[111606]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:09:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:09:35 localhost systemd[1]: Stopping nova_virtsecretd container... Oct 5 05:09:35 localhost systemd[1]: libpod-022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2.scope: Deactivated successfully. 
Oct 5 05:09:35 localhost podman[111620]: 2025-10-05 09:09:35.926098429 +0000 UTC m=+0.051769680 container died 022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_virtsecretd, release=2, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 05:09:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:09:35 localhost podman[111620]: 2025-10-05 09:09:35.965600986 +0000 UTC m=+0.091272227 container cleanup 022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, version=17.1.9, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, release=2, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_virtsecretd, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:09:35 localhost podman[111620]: nova_virtsecretd Oct 5 05:09:36 localhost podman[111635]: 2025-10-05 09:09:36.00458128 +0000 UTC m=+0.064852984 container cleanup 022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, release=2, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, 
build-date=2025-07-21T14:56:59, container_name=nova_virtsecretd, config_id=tripleo_step3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 5 05:09:36 localhost systemd[1]: libpod-conmon-022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2.scope: Deactivated successfully. Oct 5 05:09:36 localhost podman[111662]: error opening file `/run/crun/022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2/status`: No such file or directory Oct 5 05:09:36 localhost podman[111650]: 2025-10-05 09:09:36.073702418 +0000 UTC m=+0.045611113 container cleanup 022ca653791af22fce6c24313535b65071a70b68a42ab746f2b4ada345e1eff2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, container_name=nova_virtsecretd, release=2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T14:56:59, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 05:09:36 localhost podman[111650]: nova_virtsecretd Oct 5 05:09:36 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Oct 5 05:09:36 localhost systemd[1]: Stopped nova_virtsecretd container. 
Oct 5 05:09:36 localhost python3.9[111755]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:09:36 localhost systemd[1]: var-lib-containers-storage-overlay-8161bfb3b20903755fd1dc15a4ab5bffc3b3459f22903df1eccf0b1b007c1ce9-merged.mount: Deactivated successfully. Oct 5 05:09:37 localhost systemd[1]: Reloading. Oct 5 05:09:38 localhost systemd-rc-local-generator[111779]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:09:38 localhost systemd-sysv-generator[111786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:09:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:09:38 localhost systemd[1]: Stopping nova_virtstoraged container... Oct 5 05:09:38 localhost systemd[1]: tmp-crun.GIWDwd.mount: Deactivated successfully. Oct 5 05:09:38 localhost systemd[1]: libpod-2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8.scope: Deactivated successfully. 
Oct 5 05:09:38 localhost podman[111796]: 2025-10-05 09:09:38.339058252 +0000 UTC m=+0.079135050 container died 2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, container_name=nova_virtstoraged, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 05:09:38 localhost podman[111796]: 2025-10-05 09:09:38.384535091 +0000 UTC m=+0.124611869 container cleanup 2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, build-date=2025-07-21T14:56:59, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, container_name=nova_virtstoraged, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, summary=Red Hat 
OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1) Oct 5 05:09:38 localhost podman[111796]: nova_virtstoraged Oct 5 05:09:38 localhost podman[111810]: 2025-10-05 09:09:38.420497953 +0000 UTC m=+0.069353396 container cleanup 2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, managed_by=tripleo_ansible, container_name=nova_virtstoraged, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, release=2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, vcs-type=git) Oct 5 05:09:38 localhost systemd[1]: libpod-conmon-2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8.scope: Deactivated successfully. 
Oct 5 05:09:38 localhost podman[111839]: error opening file `/run/crun/2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8/status`: No such file or directory Oct 5 05:09:38 localhost podman[111828]: 2025-10-05 09:09:38.541570155 +0000 UTC m=+0.087798114 container cleanup 2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, managed_by=tripleo_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '012327e9705c184cfee14ca411150d67'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, container_name=nova_virtstoraged, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=2, com.redhat.component=openstack-nova-libvirt-container) Oct 5 05:09:38 localhost podman[111828]: nova_virtstoraged Oct 5 05:09:38 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Oct 5 05:09:38 localhost systemd[1]: Stopped nova_virtstoraged container. Oct 5 05:09:39 localhost python3.9[111934]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:09:39 localhost systemd[1]: tmp-crun.nWzPHA.mount: Deactivated successfully. Oct 5 05:09:39 localhost systemd[1]: var-lib-containers-storage-overlay-88862842d7fd5ac5a70ec544f0eb4e87207193b8951a2358f33d18a478f7bf30-merged.mount: Deactivated successfully. 
Oct 5 05:09:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e15f4d8913f15d8c1c9d5ad53239252aee9a87365ddad1950e7e2433f854cd8-userdata-shm.mount: Deactivated successfully. Oct 5 05:09:39 localhost systemd[1]: Reloading. Oct 5 05:09:39 localhost systemd-rc-local-generator[111961]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:09:39 localhost systemd-sysv-generator[111965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:09:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:09:39 localhost systemd[1]: Stopping ovn_controller container... Oct 5 05:09:39 localhost systemd[1]: libpod-14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.scope: Deactivated successfully. Oct 5 05:09:39 localhost systemd[1]: libpod-14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.scope: Consumed 2.786s CPU time. 
Oct 5 05:09:39 localhost podman[111974]: 2025-10-05 09:09:39.789678176 +0000 UTC m=+0.114386352 container died 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, release=1, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64) Oct 5 05:09:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.timer: 
Deactivated successfully. Oct 5 05:09:39 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc. Oct 5 05:09:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed to open /run/systemd/transient/14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: No such file or directory Oct 5 05:09:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc-userdata-shm.mount: Deactivated successfully. Oct 5 05:09:39 localhost podman[111974]: 2025-10-05 09:09:39.836490581 +0000 UTC m=+0.161198777 container cleanup 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, release=1, version=17.1.9, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 05:09:39 localhost podman[111974]: ovn_controller Oct 5 05:09:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.timer: Failed to open /run/systemd/transient/14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.timer: No such file or directory Oct 5 05:09:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed to open /run/systemd/transient/14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: No such file or directory Oct 5 05:09:39 localhost podman[111987]: 2025-10-05 09:09:39.864151598 +0000 UTC m=+0.066666363 container cleanup 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, version=17.1.9) Oct 5 05:09:39 localhost systemd[1]: libpod-conmon-14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.scope: Deactivated successfully. 
Oct 5 05:09:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.timer: Failed to open /run/systemd/transient/14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.timer: No such file or directory Oct 5 05:09:39 localhost systemd[1]: 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: Failed to open /run/systemd/transient/14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc.service: No such file or directory Oct 5 05:09:39 localhost podman[112001]: 2025-10-05 09:09:39.95856425 +0000 UTC m=+0.064932755 container cleanup 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 05:09:39 localhost podman[112001]: ovn_controller Oct 5 05:09:39 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Oct 5 05:09:39 localhost systemd[1]: Stopped ovn_controller container. Oct 5 05:09:40 localhost systemd[1]: var-lib-containers-storage-overlay-2eba47c07ad7ef3fb2d2a15bf005a78b3ed021af3b500a4d670828fd298cd628-merged.mount: Deactivated successfully. Oct 5 05:09:40 localhost python3.9[112106]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:09:41 localhost systemd[1]: Reloading. Oct 5 05:09:41 localhost systemd-rc-local-generator[112131]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:09:41 localhost systemd-sysv-generator[112135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:09:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:09:42 localhost systemd[1]: Stopping ovn_metadata_agent container... Oct 5 05:09:42 localhost systemd[1]: libpod-cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.scope: Deactivated successfully. 
Oct 5 05:09:42 localhost systemd[1]: libpod-cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.scope: Consumed 12.152s CPU time. Oct 5 05:09:42 localhost podman[112146]: 2025-10-05 09:09:42.57606245 +0000 UTC m=+0.415906081 container died cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:09:42 localhost systemd[1]: tmp-crun.FlV4AS.mount: Deactivated successfully. Oct 5 05:09:42 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.timer: Deactivated successfully. Oct 5 05:09:42 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b. 
Oct 5 05:09:42 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed to open /run/systemd/transient/cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: No such file or directory Oct 5 05:09:42 localhost podman[112146]: 2025-10-05 09:09:42.652021092 +0000 UTC m=+0.491864723 container cleanup cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 05:09:42 localhost podman[112146]: ovn_metadata_agent Oct 5 05:09:42 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.timer: Failed to open /run/systemd/transient/cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.timer: No such file or directory Oct 5 05:09:42 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed to open /run/systemd/transient/cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: No such file or directory Oct 5 05:09:42 localhost podman[112159]: 2025-10-05 09:09:42.674772417 +0000 UTC m=+0.084017522 container cleanup cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, 
build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:09:43 localhost systemd[1]: tmp-crun.W1zAq7.mount: Deactivated successfully. Oct 5 05:09:43 localhost systemd[1]: var-lib-containers-storage-overlay-9bdddd8ad241b770732fe4a7a44ffc977364417983f57ad1279078cf4c130f4b-merged.mount: Deactivated successfully. Oct 5 05:09:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b-userdata-shm.mount: Deactivated successfully. Oct 5 05:09:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2148 DF PROTO=TCP SPT=33210 DPT=9102 SEQ=2800829923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0535E0000000001030307) Oct 5 05:09:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2149 DF PROTO=TCP SPT=33210 DPT=9102 SEQ=2800829923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0575E0000000001030307) Oct 5 05:09:44 localhost podman[112278]: 2025-10-05 09:09:44.537240752 +0000 UTC m=+0.092212833 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=553, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc.) Oct 5 05:09:44 localhost podman[112278]: 2025-10-05 09:09:44.638807118 +0000 UTC m=+0.193779209 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, maintainer=Guillaume Abrioux ) Oct 5 05:09:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2150 DF PROTO=TCP SPT=33210 DPT=9102 SEQ=2800829923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED05F5D0000000001030307) Oct 5 05:09:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48520 DF PROTO=TCP SPT=56872 DPT=9100 SEQ=2187038219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED060EE0000000001030307) Oct 5 05:09:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48521 DF PROTO=TCP SPT=56872 DPT=9100 SEQ=2187038219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED064DE0000000001030307) Oct 5 05:09:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48522 DF PROTO=TCP SPT=56872 DPT=9100 SEQ=2187038219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED06CDD0000000001030307) Oct 5 05:09:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2151 DF PROTO=TCP SPT=33210 DPT=9102 SEQ=2800829923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED06F1D0000000001030307) Oct 5 05:09:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12436 DF 
PROTO=TCP SPT=38498 DPT=9105 SEQ=2940965353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0751F0000000001030307) Oct 5 05:09:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60928 DF PROTO=TCP SPT=57774 DPT=9882 SEQ=2853890247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED07EDD0000000001030307) Oct 5 05:09:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60929 DF PROTO=TCP SPT=57774 DPT=9882 SEQ=2853890247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED08E9D0000000001030307) Oct 5 05:10:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44688 DF PROTO=TCP SPT=50078 DPT=9102 SEQ=50425541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0C88D0000000001030307) Oct 5 05:10:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44689 DF PROTO=TCP SPT=50078 DPT=9102 SEQ=50425541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0CC9E0000000001030307) Oct 5 05:10:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44690 DF PROTO=TCP SPT=50078 DPT=9102 SEQ=50425541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0D49D0000000001030307) Oct 5 05:10:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=64741 DF PROTO=TCP SPT=44504 DPT=9100 SEQ=609587717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0D61D0000000001030307) Oct 5 05:10:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64742 DF PROTO=TCP SPT=44504 DPT=9100 SEQ=609587717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0DA1D0000000001030307) Oct 5 05:10:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64743 DF PROTO=TCP SPT=44504 DPT=9100 SEQ=609587717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0E21E0000000001030307) Oct 5 05:10:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44691 DF PROTO=TCP SPT=50078 DPT=9102 SEQ=50425541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0E45D0000000001030307) Oct 5 05:10:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15851 DF PROTO=TCP SPT=44730 DPT=9105 SEQ=3474517774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0EA500000000001030307) Oct 5 05:10:24 localhost sshd[112421]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:10:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13518 DF PROTO=TCP SPT=44116 DPT=9882 SEQ=2590441537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED0F41D0000000001030307) Oct 5 05:10:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13519 DF PROTO=TCP SPT=44116 DPT=9882 SEQ=2590441537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED103DD0000000001030307) Oct 5 05:10:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41252 DF PROTO=TCP SPT=49550 DPT=9102 SEQ=3215957514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED13DBD0000000001030307) Oct 5 05:10:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41253 DF PROTO=TCP SPT=49550 DPT=9102 SEQ=3215957514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED141DD0000000001030307) Oct 5 05:10:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41254 DF PROTO=TCP SPT=49550 DPT=9102 SEQ=3215957514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED149DD0000000001030307) Oct 5 05:10:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59266 DF PROTO=TCP SPT=52880 DPT=9100 SEQ=1245065429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED14B4D0000000001030307) Oct 5 05:10:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59267 DF PROTO=TCP SPT=52880 DPT=9100 SEQ=1245065429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED14F5D0000000001030307) Oct 5 05:10:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59268 DF PROTO=TCP SPT=52880 DPT=9100 SEQ=1245065429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1575E0000000001030307) Oct 5 05:10:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41255 DF PROTO=TCP SPT=49550 DPT=9102 SEQ=3215957514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1599D0000000001030307) Oct 5 05:10:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16380 DF PROTO=TCP SPT=50302 DPT=9105 SEQ=3933325376 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED15F7F0000000001030307) Oct 5 05:10:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55761 DF PROTO=TCP SPT=34148 DPT=9882 SEQ=337268556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1695D0000000001030307) Oct 5 05:10:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55762 DF PROTO=TCP SPT=34148 DPT=9882 SEQ=337268556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1791E0000000001030307) Oct 5 05:11:06 localhost systemd[1]: tripleo_ovn_metadata_agent.service: State 'stop-sigterm' timed out. Killing. Oct 5 05:11:06 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Killing process 72122 (conmon) with signal SIGKILL. 
Oct 5 05:11:06 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Main process exited, code=killed, status=9/KILL Oct 5 05:11:06 localhost systemd[1]: libpod-conmon-cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.scope: Deactivated successfully. Oct 5 05:11:06 localhost podman[112512]: error opening file `/run/crun/cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b/status`: No such file or directory Oct 5 05:11:06 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.timer: Failed to open /run/systemd/transient/cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.timer: No such file or directory Oct 5 05:11:06 localhost systemd[1]: cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: Failed to open /run/systemd/transient/cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b.service: No such file or directory Oct 5 05:11:06 localhost podman[112501]: 2025-10-05 09:11:06.954695087 +0000 UTC m=+0.100582380 container cleanup cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1) Oct 5 05:11:06 localhost podman[112501]: ovn_metadata_agent Oct 5 05:11:06 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Failed with result 'timeout'. Oct 5 05:11:06 localhost systemd[1]: Stopped ovn_metadata_agent container. 
Oct 5 05:11:07 localhost python3.9[112606]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:11:08 localhost systemd[1]: Reloading. Oct 5 05:11:08 localhost systemd-sysv-generator[112639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:11:08 localhost systemd-rc-local-generator[112636]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:11:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:11:10 localhost python3.9[112736]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:11 localhost python3.9[112828]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:12 localhost python3.9[112920]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:12 localhost python3.9[113012]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:13 localhost python3.9[113104]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55874 DF PROTO=TCP SPT=39558 DPT=9102 SEQ=3622955486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1B2ED0000000001030307) Oct 5 05:11:13 localhost python3.9[113196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55875 DF PROTO=TCP SPT=39558 DPT=9102 SEQ=3622955486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1B6DE0000000001030307) Oct 5 05:11:14 localhost python3.9[113288]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:15 localhost python3.9[113380]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:15 localhost python3.9[113472]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:16 localhost python3.9[113564]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55876 DF PROTO=TCP SPT=39558 DPT=9102 SEQ=3622955486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1BEDD0000000001030307) Oct 5 05:11:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21743 DF PROTO=TCP SPT=48588 DPT=9100 SEQ=3325044375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1C07D0000000001030307) Oct 5 05:11:16 localhost python3.9[113656]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:17 localhost python3.9[113748]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Oct 5 05:11:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21744 DF PROTO=TCP SPT=48588 DPT=9100 SEQ=3325044375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1C49E0000000001030307) Oct 5 05:11:18 localhost python3.9[113840]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:18 localhost python3.9[113932]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:19 localhost python3.9[114024]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21745 DF 
PROTO=TCP SPT=48588 DPT=9100 SEQ=3325044375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1CC9D0000000001030307) Oct 5 05:11:20 localhost python3.9[114116]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55877 DF PROTO=TCP SPT=39558 DPT=9102 SEQ=3622955486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1CE9E0000000001030307) Oct 5 05:11:20 localhost python3.9[114208]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:21 localhost python3.9[114300]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:21 localhost python3.9[114392]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34047 DF PROTO=TCP SPT=41304 DPT=9105 SEQ=2403761874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1D4AF0000000001030307) Oct 5 05:11:22 localhost python3.9[114484]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:23 localhost python3.9[114576]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:23 localhost python3.9[114668]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:24 localhost python3.9[114760]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2722 DF PROTO=TCP SPT=34916 DPT=9882 SEQ=4250245821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1DE5D0000000001030307) Oct 5 05:11:24 localhost python3.9[114852]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:25 localhost python3.9[114944]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:26 localhost python3.9[115036]: ansible-ansible.builtin.file Invoked with 
path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:26 localhost python3.9[115128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:27 localhost python3.9[115220]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:27 localhost python3.9[115312]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:28 localhost python3.9[115404]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2723 DF PROTO=TCP SPT=34916 DPT=9882 SEQ=4250245821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED1EE1D0000000001030307) Oct 5 05:11:28 localhost python3.9[115496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:29 localhost python3.9[115588]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:30 localhost python3.9[115680]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Oct 5 05:11:30 localhost python3.9[115772]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:31 localhost python3.9[115864]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:31 localhost python3.9[115956]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:32 localhost python3.9[116048]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:33 localhost python3.9[116140]: 
ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:33 localhost python3.9[116232]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:34 localhost python3.9[116324]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:34 localhost python3.9[116416]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:35 localhost python3.9[116508]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:36 localhost python3.9[116600]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:37 localhost python3.9[116692]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:11:38 localhost python3.9[116784]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:11:38 localhost systemd[1]: Reloading. Oct 5 05:11:38 localhost systemd-sysv-generator[116814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:11:38 localhost systemd-rc-local-generator[116807]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:11:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 5 05:11:39 localhost python3.9[116912]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:40 localhost python3.9[117005]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:40 localhost python3.9[117098]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:41 localhost python3.9[117191]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:43 localhost python3.9[117284]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31748 DF PROTO=TCP SPT=51494 DPT=9102 SEQ=4190413238 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080ABED2281D0000000001030307) Oct 5 05:11:43 localhost python3.9[117377]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31749 DF PROTO=TCP SPT=51494 DPT=9102 SEQ=4190413238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED22C1D0000000001030307) Oct 5 05:11:44 localhost python3.9[117470]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:45 localhost python3.9[117563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:45 localhost python3.9[117656]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:46 localhost python3.9[117749]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True 
_raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31750 DF PROTO=TCP SPT=51494 DPT=9102 SEQ=4190413238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2341D0000000001030307) Oct 5 05:11:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10108 DF PROTO=TCP SPT=40100 DPT=9100 SEQ=2165874396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED235AD0000000001030307) Oct 5 05:11:47 localhost python3.9[117842]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:47 localhost python3.9[117935]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10109 DF PROTO=TCP SPT=40100 DPT=9100 SEQ=2165874396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2399E0000000001030307) Oct 5 05:11:48 localhost python3.9[118029]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:48 localhost python3.9[118166]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:49 localhost python3.9[118276]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10110 DF PROTO=TCP SPT=40100 DPT=9100 SEQ=2165874396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2419D0000000001030307) Oct 5 05:11:50 localhost python3.9[118384]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31751 DF PROTO=TCP SPT=51494 DPT=9102 SEQ=4190413238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED243DD0000000001030307) Oct 5 05:11:50 localhost python3.9[118477]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False 
expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:51 localhost python3.9[118570]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:51 localhost python3.9[118663]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31939 DF PROTO=TCP SPT=54186 DPT=9105 SEQ=3692026806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED249DF0000000001030307) Oct 5 05:11:52 localhost python3.9[118756]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:53 localhost python3.9[118849]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:11:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34339 DF PROTO=TCP SPT=50784 DPT=9882 SEQ=2734069558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2539D0000000001030307) Oct 5 05:11:54 localhost systemd[1]: session-38.scope: Deactivated successfully. Oct 5 05:11:54 localhost systemd[1]: session-38.scope: Consumed 50.476s CPU time. Oct 5 05:11:54 localhost systemd-logind[760]: Session 38 logged out. Waiting for processes to exit. Oct 5 05:11:54 localhost systemd-logind[760]: Removed session 38. Oct 5 05:11:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34340 DF PROTO=TCP SPT=50784 DPT=9882 SEQ=2734069558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2635D0000000001030307) Oct 5 05:12:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48588 DF PROTO=TCP SPT=57922 DPT=9102 SEQ=1390756520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED29D4E0000000001030307) Oct 5 05:12:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48589 DF PROTO=TCP SPT=57922 DPT=9102 SEQ=1390756520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2A15D0000000001030307) Oct 5 05:12:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48590 DF PROTO=TCP SPT=57922 DPT=9102 SEQ=1390756520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2A95E0000000001030307) Oct 5 05:12:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28577 DF PROTO=TCP SPT=55390 DPT=9100 SEQ=36647971 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2AADE0000000001030307) Oct 5 05:12:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28578 DF PROTO=TCP SPT=55390 DPT=9100 SEQ=36647971 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2AEDD0000000001030307) Oct 5 05:12:19 localhost sshd[118865]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:12:19 localhost systemd-logind[760]: New session 39 of user zuul. Oct 5 05:12:19 localhost systemd[1]: Started Session 39 of User zuul. Oct 5 05:12:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28579 DF PROTO=TCP SPT=55390 DPT=9100 SEQ=36647971 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2B6DD0000000001030307) Oct 5 05:12:20 localhost python3.9[118958]: ansible-ansible.legacy.ping Invoked with data=pong Oct 5 05:12:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48591 DF PROTO=TCP SPT=57922 DPT=9102 SEQ=1390756520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2B91E0000000001030307) Oct 5 05:12:21 localhost python3.9[119062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:12:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27485 DF PROTO=TCP SPT=43280 DPT=9105 SEQ=1612154796 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080ABED2BF120000000001030307) Oct 5 05:12:22 localhost python3.9[119154]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:12:23 localhost python3.9[119247]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:12:24 localhost python3.9[119339]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:12:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58546 DF PROTO=TCP SPT=52614 DPT=9882 SEQ=422044470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2C8DD0000000001030307) Oct 5 05:12:24 localhost python3.9[119431]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:12:25 localhost python3.9[119504]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655544.2875233-177-98014613928026/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:12:26 localhost python3.9[119596]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:12:27 localhost python3.9[119692]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:12:27 localhost python3.9[119782]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:12:28 localhost network[119799]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:12:28 localhost network[119800]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:12:28 localhost network[119801]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:12:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58547 DF PROTO=TCP SPT=52614 DPT=9882 SEQ=422044470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED2D89D0000000001030307) Oct 5 05:12:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:12:33 localhost python3.9[119998]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:12:34 localhost python3.9[120088]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:12:35 localhost python3.9[120184]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:12:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18822 DF PROTO=TCP SPT=52566 DPT=9102 SEQ=1757362548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3127D0000000001030307)
Oct 5 05:12:44 localhost systemd[1]: Stopping OpenSSH server daemon...
Oct 5 05:12:44 localhost systemd[1]: sshd.service: Deactivated successfully.
Oct 5 05:12:44 localhost systemd[1]: Stopped OpenSSH server daemon.
Oct 5 05:12:44 localhost systemd[1]: Stopped target sshd-keygen.target.
Oct 5 05:12:44 localhost systemd[1]: Stopping sshd-keygen.target...
Oct 5 05:12:44 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:44 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:44 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:44 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 5 05:12:44 localhost systemd[1]: Starting OpenSSH server daemon...
Oct 5 05:12:44 localhost sshd[120227]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:12:44 localhost systemd[1]: Started OpenSSH server daemon.
Oct 5 05:12:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18823 DF PROTO=TCP SPT=52566 DPT=9102 SEQ=1757362548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3169D0000000001030307)
Oct 5 05:12:44 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 05:12:44 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 05:12:44 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 05:12:45 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 05:12:45 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 05:12:45 localhost systemd[1]: run-red85aa64e49f4ff7b04ffcf095f3ec90.service: Deactivated successfully.
Oct 5 05:12:45 localhost systemd[1]: run-r0756cc68d7274827ba227077c12676f2.service: Deactivated successfully.
Oct 5 05:12:45 localhost systemd[1]: Stopping OpenSSH server daemon...
Oct 5 05:12:45 localhost systemd[1]: sshd.service: Deactivated successfully.
Oct 5 05:12:45 localhost systemd[1]: Stopped OpenSSH server daemon.
Oct 5 05:12:45 localhost systemd[1]: Stopped target sshd-keygen.target.
Oct 5 05:12:45 localhost systemd[1]: Stopping sshd-keygen.target...
Oct 5 05:12:45 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:45 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:45 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:45 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 5 05:12:45 localhost systemd[1]: Starting OpenSSH server daemon...
Oct 5 05:12:45 localhost sshd[120398]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:12:45 localhost systemd[1]: Started OpenSSH server daemon.
Oct 5 05:12:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18824 DF PROTO=TCP SPT=52566 DPT=9102 SEQ=1757362548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED31E9E0000000001030307)
Oct 5 05:12:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7206 DF PROTO=TCP SPT=50366 DPT=9100 SEQ=320949723 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3200D0000000001030307)
Oct 5 05:12:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7207 DF PROTO=TCP SPT=50366 DPT=9100 SEQ=320949723 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3241D0000000001030307)
Oct 5 05:12:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7208 DF PROTO=TCP SPT=50366 DPT=9100 SEQ=320949723 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED32C1D0000000001030307)
Oct 5 05:12:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18825 DF PROTO=TCP SPT=52566 DPT=9102 SEQ=1757362548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED32E5D0000000001030307)
Oct 5 05:12:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57324 DF PROTO=TCP SPT=35512 DPT=9105 SEQ=1569336559 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3343F0000000001030307)
Oct 5 05:12:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61816 DF PROTO=TCP SPT=50000 DPT=9882 SEQ=3149488057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED33E1D0000000001030307)
Oct 5 05:12:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61817 DF PROTO=TCP SPT=50000 DPT=9882 SEQ=3149488057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED34DDD0000000001030307)
Oct 5 05:13:05 localhost sshd[120585]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:13:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61321 DF PROTO=TCP SPT=42846 DPT=9102 SEQ=3028328036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED387AD0000000001030307)
Oct 5 05:13:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61322 DF PROTO=TCP SPT=42846 DPT=9102 SEQ=3028328036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED38B9E0000000001030307)
Oct 5 05:13:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61323 DF PROTO=TCP SPT=42846 DPT=9102 SEQ=3028328036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3939E0000000001030307)
Oct 5 05:13:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53382 DF PROTO=TCP SPT=59834 DPT=9100 SEQ=3395688070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3953D0000000001030307)
Oct 5 05:13:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53383 DF PROTO=TCP SPT=59834 DPT=9100 SEQ=3395688070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3995D0000000001030307)
Oct 5 05:13:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53384 DF PROTO=TCP SPT=59834 DPT=9100 SEQ=3395688070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3A15D0000000001030307)
Oct 5 05:13:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61324 DF PROTO=TCP SPT=42846 DPT=9102 SEQ=3028328036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3A35E0000000001030307)
Oct 5 05:13:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63964 DF PROTO=TCP SPT=48066 DPT=9105 SEQ=3728499771 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3A96F0000000001030307)
Oct 5 05:13:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14355 DF PROTO=TCP SPT=49448 DPT=9882 SEQ=2227332138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3B31D0000000001030307)
Oct 5 05:13:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14356 DF PROTO=TCP SPT=49448 DPT=9882 SEQ=2227332138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3C2DE0000000001030307)
Oct 5 05:13:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27163 DF PROTO=TCP SPT=35544 DPT=9102 SEQ=3343778745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED3FCDE0000000001030307)
Oct 5 05:13:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27164 DF PROTO=TCP SPT=35544 DPT=9102 SEQ=3343778745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED400DD0000000001030307)
Oct 5 05:13:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27165 DF PROTO=TCP SPT=35544 DPT=9102 SEQ=3343778745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED408DD0000000001030307)
Oct 5 05:13:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18090 DF PROTO=TCP SPT=55898 DPT=9100 SEQ=1719745347 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED40A6D0000000001030307)
Oct 5 05:13:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18091 DF PROTO=TCP SPT=55898 DPT=9100 SEQ=1719745347 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED40E5E0000000001030307)
Oct 5 05:13:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18092 DF PROTO=TCP SPT=55898 DPT=9100 SEQ=1719745347 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4165D0000000001030307)
Oct 5 05:13:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27166 DF PROTO=TCP SPT=35544 DPT=9102 SEQ=3343778745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4189D0000000001030307)
Oct 5 05:13:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39988 DF PROTO=TCP SPT=55648 DPT=9105 SEQ=1747023912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED41E9F0000000001030307)
Oct 5 05:13:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10668 DF PROTO=TCP SPT=52240 DPT=9882 SEQ=3838138604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4285D0000000001030307)
Oct 5 05:13:55 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=16 res=1
Oct 5 05:13:56 localhost kernel: SELinux: Converting 2754 SID table entries...
Oct 5 05:13:56 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:13:56 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:13:57 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=17 res=1
Oct 5 05:13:58 localhost python3.9[121275]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:13:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10669 DF PROTO=TCP SPT=52240 DPT=9882 SEQ=3838138604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4381D0000000001030307)
Oct 5 05:13:58 localhost python3.9[121367]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:13:59 localhost python3.9[121440]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655638.3359847-399-71151352246263/.source.fact _original_basename=.vknqs6wq follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:14:00 localhost python3.9[121530]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:14:01 localhost python3.9[121628]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:14:02 localhost python3.9[121682]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:14:05 localhost systemd[1]: Reloading.
Oct 5 05:14:06 localhost systemd-rc-local-generator[121714]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:14:06 localhost systemd-sysv-generator[121717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:14:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:14:06 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 05:14:07 localhost python3.9[121822]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:14:09 localhost python3.9[122061]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 5 05:14:10 localhost python3.9[122153]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 5 05:14:11 localhost python3.9[122246]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:14:12 localhost python3.9[122338]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 5 05:14:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37561 DF PROTO=TCP SPT=35236 DPT=9102 SEQ=1131073149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4720D0000000001030307)
Oct 5 05:14:14 localhost python3.9[122430]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:14:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37562 DF PROTO=TCP SPT=35236 DPT=9102 SEQ=1131073149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4761D0000000001030307)
Oct 5 05:14:14 localhost python3.9[122522]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:14:15 localhost python3.9[122595]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655654.2995188-723-279170686051837/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:14:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37563 DF PROTO=TCP SPT=35236 DPT=9102 SEQ=1131073149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED47E1D0000000001030307)
Oct 5 05:14:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59393 DF PROTO=TCP SPT=50810 DPT=9100 SEQ=188969721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED47F9C0000000001030307)
Oct 5 05:14:16 localhost python3.9[122687]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 5 05:14:17 localhost python3.9[122780]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 5 05:14:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59394 DF PROTO=TCP SPT=50810 DPT=9100 SEQ=188969721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4839D0000000001030307)
Oct 5 05:14:18 localhost python3.9[122873]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 5 05:14:19 localhost python3.9[122971]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 5 05:14:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59395 DF PROTO=TCP SPT=50810 DPT=9100 SEQ=188969721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED48B9E0000000001030307)
Oct 5 05:14:20 localhost python3.9[123063]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:14:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37564 DF PROTO=TCP SPT=35236 DPT=9102 SEQ=1131073149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED48DDE0000000001030307)
Oct 5 05:14:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50122 DF PROTO=TCP SPT=35126 DPT=9105 SEQ=1738576852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED493CF0000000001030307)
Oct 5 05:14:23 localhost python3.9[123157]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:14:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61060 DF PROTO=TCP SPT=46778 DPT=9882 SEQ=2248528055 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED49D9D0000000001030307)
Oct 5 05:14:24 localhost python3.9[123249]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:14:25 localhost python3.9[123322]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655664.0993886-966-134564334850017/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:14:26 localhost python3.9[123414]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:14:26 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 5 05:14:26 localhost systemd[1]: Stopped Load Kernel Modules.
Oct 5 05:14:26 localhost systemd[1]: Stopping Load Kernel Modules...
Oct 5 05:14:26 localhost systemd[1]: Starting Load Kernel Modules...
Oct 5 05:14:26 localhost systemd-modules-load[123418]: Module 'msr' is built in
Oct 5 05:14:26 localhost systemd[1]: Finished Load Kernel Modules.
Oct 5 05:14:27 localhost python3.9[123512]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:14:28 localhost python3.9[123585]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655667.4055443-1035-151895211898665/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:14:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61061 DF PROTO=TCP SPT=46778 DPT=9882 SEQ=2248528055 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4AD5D0000000001030307)
Oct 5 05:14:33 localhost python3.9[123677]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:14:39 localhost python3.9[123769]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:14:39 localhost python3.9[123861]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 5 05:14:40 localhost python3.9[123951]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:14:41 localhost python3.9[124043]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:14:41 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 5 05:14:41 localhost systemd[1]: tuned.service: Deactivated successfully.
Oct 5 05:14:41 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 5 05:14:41 localhost systemd[1]: tuned.service: Consumed 2.016s CPU time, no IO.
Oct 5 05:14:41 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 5 05:14:43 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Oct 5 05:14:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21481 DF PROTO=TCP SPT=39012 DPT=9102 SEQ=3573632107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4E73D0000000001030307)
Oct 5 05:14:43 localhost python3.9[124145]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 5 05:14:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21482 DF PROTO=TCP SPT=39012 DPT=9102 SEQ=3573632107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4EB5D0000000001030307)
Oct 5 05:14:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21483 DF PROTO=TCP SPT=39012 DPT=9102 SEQ=3573632107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4F35D0000000001030307)
Oct 5 05:14:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11095 DF PROTO=TCP SPT=52318 DPT=9100 SEQ=3594446504 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4F4CC0000000001030307)
Oct 5 05:14:47 localhost python3.9[124237]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:14:47 localhost systemd[1]: Reloading.
Oct 5 05:14:47 localhost systemd-sysv-generator[124264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:14:47 localhost systemd-rc-local-generator[124261]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:14:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:14:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11096 DF PROTO=TCP SPT=52318 DPT=9100 SEQ=3594446504 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED4F8DD0000000001030307)
Oct 5 05:14:48 localhost python3.9[124367]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:14:48 localhost systemd[1]: Reloading.
Oct 5 05:14:48 localhost systemd-rc-local-generator[124390]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:14:48 localhost systemd-sysv-generator[124395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:14:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:14:49 localhost python3.9[124497]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:14:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11097 DF PROTO=TCP SPT=52318 DPT=9100 SEQ=3594446504 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED500DD0000000001030307) Oct 5 05:14:50 localhost python3.9[124590]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:14:50 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS Oct 5 05:14:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21484 DF PROTO=TCP SPT=39012 DPT=9102 SEQ=3573632107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5031D0000000001030307) Oct 5 05:14:51 localhost python3.9[124683]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:14:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4525 DF PROTO=TCP SPT=37552 DPT=9105 SEQ=4072116773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED508FF0000000001030307) Oct 5 05:14:52 
localhost python3.9[124782]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:14:53 localhost python3.9[124875]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:14:53 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 5 05:14:53 localhost systemd[1]: Stopped Apply Kernel Variables. Oct 5 05:14:53 localhost systemd[1]: Stopping Apply Kernel Variables... Oct 5 05:14:53 localhost systemd[1]: Starting Apply Kernel Variables... Oct 5 05:14:53 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 5 05:14:53 localhost systemd[1]: Finished Apply Kernel Variables. Oct 5 05:14:54 localhost systemd[1]: session-39.scope: Deactivated successfully. Oct 5 05:14:54 localhost systemd[1]: session-39.scope: Consumed 1min 55.350s CPU time. Oct 5 05:14:54 localhost systemd-logind[760]: Session 39 logged out. Waiting for processes to exit. Oct 5 05:14:54 localhost systemd-logind[760]: Removed session 39. 
Oct 5 05:14:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58910 DF PROTO=TCP SPT=52800 DPT=9882 SEQ=113782724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED512DD0000000001030307) Oct 5 05:14:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58911 DF PROTO=TCP SPT=52800 DPT=9882 SEQ=113782724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5229D0000000001030307) Oct 5 05:14:59 localhost sshd[125006]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:14:59 localhost systemd-logind[760]: New session 40 of user zuul. Oct 5 05:14:59 localhost systemd[1]: Started Session 40 of User zuul. Oct 5 05:15:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:15:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 701 syncs, 7.43 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:15:00 localhost python3.9[125099]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:15:02 localhost python3.9[125208]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 
filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:15:03 localhost python3.9[125304]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:15:04 localhost python3.9[125395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:15:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:15:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5451 writes, 24K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5451 writes, 723 syncs, 7.54 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:15:05 localhost python3.9[125491]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:15:06 localhost python3.9[125545]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True 
sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:15:10 localhost python3.9[125639]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:15:11 localhost python3.9[125794]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:15:12 localhost python3.9[125886]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:15:13 localhost python3.9[125989]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:15:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44204 DF PROTO=TCP SPT=50260 DPT=9102 SEQ=3452034552 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED55C6D0000000001030307) Oct 5 05:15:13 localhost python3.9[126037]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:15:14 localhost python3.9[126129]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:15:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44205 DF PROTO=TCP SPT=50260 DPT=9102 SEQ=3452034552 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5605D0000000001030307) Oct 5 05:15:15 localhost python3.9[126202]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655713.7657251-323-269613412716529/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:15 localhost python3.9[126294]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:15 localhost systemd-journald[47722]: Field hash table of 
/run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Oct 5 05:15:15 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:15:15 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:15:16 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:15:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44206 DF PROTO=TCP SPT=50260 DPT=9102 SEQ=3452034552 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5685E0000000001030307) Oct 5 05:15:16 localhost python3.9[126387]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44797 DF PROTO=TCP SPT=43048 DPT=9100 SEQ=2032853616 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED569FD0000000001030307) Oct 5 05:15:17 localhost python3.9[126479]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False 
state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:17 localhost python3.9[126571]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44798 DF PROTO=TCP SPT=43048 DPT=9100 SEQ=2032853616 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED56E1E0000000001030307) Oct 5 05:15:18 localhost python3.9[126661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:15:19 localhost python3.9[126755]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 
use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44799 DF PROTO=TCP SPT=43048 DPT=9100 SEQ=2032853616 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5761D0000000001030307) Oct 5 05:15:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44207 DF PROTO=TCP SPT=50260 DPT=9102 SEQ=3452034552 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5781D0000000001030307) Oct 5 05:15:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61845 DF PROTO=TCP SPT=59798 DPT=9105 SEQ=645087020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED57E2F0000000001030307) Oct 5 05:15:23 localhost python3.9[126849]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2286 DF PROTO=TCP SPT=50392 
DPT=9882 SEQ=984281436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED587DD0000000001030307) Oct 5 05:15:27 localhost python3.9[126943]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2287 DF PROTO=TCP SPT=50392 DPT=9882 SEQ=984281436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5979D0000000001030307) Oct 5 05:15:31 localhost python3.9[127043]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:35 localhost python3.9[127137]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] 
enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:39 localhost python3.9[127231]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:43 localhost python3.9[127325]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42172 DF PROTO=TCP 
SPT=43190 DPT=9102 SEQ=1443495786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5D19D0000000001030307) Oct 5 05:15:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42173 DF PROTO=TCP SPT=43190 DPT=9102 SEQ=1443495786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5D59E0000000001030307) Oct 5 05:15:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42174 DF PROTO=TCP SPT=43190 DPT=9102 SEQ=1443495786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5DD9D0000000001030307) Oct 5 05:15:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44285 DF PROTO=TCP SPT=53276 DPT=9100 SEQ=2973617304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5DF2C0000000001030307) Oct 5 05:15:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44286 DF PROTO=TCP SPT=53276 DPT=9100 SEQ=2973617304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5E31D0000000001030307) Oct 5 05:15:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44287 DF PROTO=TCP SPT=53276 DPT=9100 SEQ=2973617304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5EB1D0000000001030307) Oct 5 05:15:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42175 DF 
PROTO=TCP SPT=43190 DPT=9102 SEQ=1443495786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5ED5E0000000001030307) Oct 5 05:15:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52017 DF PROTO=TCP SPT=41412 DPT=9105 SEQ=1932956589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5F35F0000000001030307) Oct 5 05:15:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46396 DF PROTO=TCP SPT=48824 DPT=9882 SEQ=1444615377 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED5FD1D0000000001030307) Oct 5 05:15:54 localhost python3.9[127495]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:15:55 localhost python3.9[127600]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:15:56 localhost python3.9[127673]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759655755.0206816-719-137148753549194/.source.json _original_basename=.kdtthxpg follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None 
setype=None attributes=None Oct 5 05:15:57 localhost python3.9[127765]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:15:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46397 DF PROTO=TCP SPT=48824 DPT=9882 SEQ=1444615377 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED60CDD0000000001030307) Oct 5 05:16:03 localhost podman[127778]: 2025-10-05 09:15:57.445900869 +0000 UTC m=+0.043208668 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Oct 5 05:16:03 localhost podman[127983]: Oct 5 05:16:03 localhost podman[127983]: 2025-10-05 09:16:03.288754298 +0000 UTC m=+0.073606810 container create 5c53ae547363df59d55b2c942a6a15e0dfd2a365e9b8e4e2fcf3c2789a379303 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_hoover, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, description=Red Hat Ceph Storage 
7, version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64) Oct 5 05:16:03 localhost systemd[1]: Started libpod-conmon-5c53ae547363df59d55b2c942a6a15e0dfd2a365e9b8e4e2fcf3c2789a379303.scope. Oct 5 05:16:03 localhost systemd[1]: Started libcrun container. Oct 5 05:16:03 localhost podman[127983]: 2025-10-05 09:16:03.254476402 +0000 UTC m=+0.039328964 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:16:03 localhost podman[127983]: 2025-10-05 09:16:03.363887238 +0000 UTC m=+0.148739780 container init 5c53ae547363df59d55b2c942a6a15e0dfd2a365e9b8e4e2fcf3c2789a379303 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_hoover, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph 
Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True) Oct 5 05:16:03 localhost podman[127983]: 2025-10-05 09:16:03.373993061 +0000 UTC m=+0.158845583 container start 5c53ae547363df59d55b2c942a6a15e0dfd2a365e9b8e4e2fcf3c2789a379303 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_hoover, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , version=7, ceph=True, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:16:03 localhost podman[127983]: 2025-10-05 09:16:03.374261558 +0000 UTC m=+0.159114070 container attach 5c53ae547363df59d55b2c942a6a15e0dfd2a365e9b8e4e2fcf3c2789a379303 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_hoover, vcs-type=git, release=553, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, 
com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph) Oct 5 05:16:03 localhost zealous_hoover[128020]: 167 167 Oct 5 05:16:03 localhost systemd[1]: libpod-5c53ae547363df59d55b2c942a6a15e0dfd2a365e9b8e4e2fcf3c2789a379303.scope: Deactivated successfully. 
Oct 5 05:16:03 localhost podman[127983]: 2025-10-05 09:16:03.379457289 +0000 UTC m=+0.164309811 container died 5c53ae547363df59d55b2c942a6a15e0dfd2a365e9b8e4e2fcf3c2789a379303 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_hoover, release=553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., distribution-scope=public) Oct 5 05:16:03 localhost podman[128033]: 2025-10-05 09:16:03.499936315 +0000 UTC m=+0.108128473 container remove 5c53ae547363df59d55b2c942a6a15e0dfd2a365e9b8e4e2fcf3c2789a379303 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_hoover, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, version=7, ceph=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, distribution-scope=public, release=553, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, name=rhceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64) Oct 5 05:16:03 localhost systemd[1]: libpod-conmon-5c53ae547363df59d55b2c942a6a15e0dfd2a365e9b8e4e2fcf3c2789a379303.scope: Deactivated successfully. Oct 5 05:16:03 localhost podman[128075]: Oct 5 05:16:03 localhost podman[128075]: 2025-10-05 09:16:03.721648737 +0000 UTC m=+0.079018037 container create 4b4133ac81c563e98bd75d1405f3fce62602ad4e46320ed3d1b9d68b41b87700 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_kilby, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, release=553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, distribution-scope=public, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, 
io.openshift.expose-services=) Oct 5 05:16:03 localhost systemd[1]: Started libpod-conmon-4b4133ac81c563e98bd75d1405f3fce62602ad4e46320ed3d1b9d68b41b87700.scope. Oct 5 05:16:03 localhost systemd[1]: Started libcrun container. Oct 5 05:16:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e92110254c16352221d03c7dc75dcd59efc24e6f8344a365e6ce34a4d4ed460/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 05:16:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e92110254c16352221d03c7dc75dcd59efc24e6f8344a365e6ce34a4d4ed460/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:16:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e92110254c16352221d03c7dc75dcd59efc24e6f8344a365e6ce34a4d4ed460/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:16:03 localhost podman[128075]: 2025-10-05 09:16:03.78170469 +0000 UTC m=+0.139073980 container init 4b4133ac81c563e98bd75d1405f3fce62602ad4e46320ed3d1b9d68b41b87700 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_kilby, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, version=7, release=553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, RELEASE=main, name=rhceph) Oct 5 05:16:03 localhost podman[128075]: 2025-10-05 09:16:03.69068404 +0000 UTC m=+0.048053370 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:16:03 localhost podman[128075]: 2025-10-05 09:16:03.792609315 +0000 UTC m=+0.149978635 container start 4b4133ac81c563e98bd75d1405f3fce62602ad4e46320ed3d1b9d68b41b87700 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_kilby, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, release=553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, architecture=x86_64, maintainer=Guillaume Abrioux ) Oct 5 05:16:03 localhost podman[128075]: 2025-10-05 09:16:03.792991205 +0000 UTC m=+0.150360505 container attach 4b4133ac81c563e98bd75d1405f3fce62602ad4e46320ed3d1b9d68b41b87700 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_kilby, release=553, description=Red Hat Ceph 
Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_CLEAN=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:16:04 localhost systemd[1]: var-lib-containers-storage-overlay-42abba0ce58e2f04b188bba15404d4859c5d43a78905db46d9e62e3e2bdbb746-merged.mount: Deactivated successfully. 
Oct 5 05:16:04 localhost python3.9[128380]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:04 localhost recursing_kilby[128091]: [ Oct 5 05:16:04 localhost recursing_kilby[128091]: { Oct 5 05:16:04 localhost recursing_kilby[128091]: "available": false, Oct 5 05:16:04 localhost recursing_kilby[128091]: "ceph_device": false, Oct 5 05:16:04 localhost recursing_kilby[128091]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 5 05:16:04 localhost recursing_kilby[128091]: "lsm_data": {}, Oct 5 05:16:04 localhost recursing_kilby[128091]: "lvs": [], Oct 5 05:16:04 localhost recursing_kilby[128091]: "path": "/dev/sr0", Oct 5 05:16:04 localhost recursing_kilby[128091]: "rejected_reasons": [ Oct 5 05:16:04 localhost recursing_kilby[128091]: "Has a FileSystem", Oct 5 05:16:04 localhost recursing_kilby[128091]: "Insufficient space (<5GB)" Oct 5 05:16:04 localhost recursing_kilby[128091]: ], Oct 5 05:16:04 localhost recursing_kilby[128091]: "sys_api": { Oct 5 05:16:04 localhost recursing_kilby[128091]: "actuators": null, Oct 5 05:16:04 localhost recursing_kilby[128091]: "device_nodes": "sr0", Oct 5 05:16:04 localhost recursing_kilby[128091]: "human_readable_size": "482.00 KB", Oct 5 05:16:04 localhost recursing_kilby[128091]: 
"id_bus": "ata", Oct 5 05:16:04 localhost recursing_kilby[128091]: "model": "QEMU DVD-ROM", Oct 5 05:16:04 localhost recursing_kilby[128091]: "nr_requests": "2", Oct 5 05:16:04 localhost recursing_kilby[128091]: "partitions": {}, Oct 5 05:16:04 localhost recursing_kilby[128091]: "path": "/dev/sr0", Oct 5 05:16:04 localhost recursing_kilby[128091]: "removable": "1", Oct 5 05:16:04 localhost recursing_kilby[128091]: "rev": "2.5+", Oct 5 05:16:04 localhost recursing_kilby[128091]: "ro": "0", Oct 5 05:16:04 localhost recursing_kilby[128091]: "rotational": "1", Oct 5 05:16:04 localhost recursing_kilby[128091]: "sas_address": "", Oct 5 05:16:04 localhost recursing_kilby[128091]: "sas_device_handle": "", Oct 5 05:16:04 localhost recursing_kilby[128091]: "scheduler_mode": "mq-deadline", Oct 5 05:16:04 localhost recursing_kilby[128091]: "sectors": 0, Oct 5 05:16:04 localhost recursing_kilby[128091]: "sectorsize": "2048", Oct 5 05:16:04 localhost recursing_kilby[128091]: "size": 493568.0, Oct 5 05:16:04 localhost recursing_kilby[128091]: "support_discard": "0", Oct 5 05:16:04 localhost recursing_kilby[128091]: "type": "disk", Oct 5 05:16:04 localhost recursing_kilby[128091]: "vendor": "QEMU" Oct 5 05:16:04 localhost recursing_kilby[128091]: } Oct 5 05:16:04 localhost recursing_kilby[128091]: } Oct 5 05:16:04 localhost recursing_kilby[128091]: ] Oct 5 05:16:04 localhost systemd[1]: libpod-4b4133ac81c563e98bd75d1405f3fce62602ad4e46320ed3d1b9d68b41b87700.scope: Deactivated successfully. 
Oct 5 05:16:04 localhost podman[128075]: 2025-10-05 09:16:04.725499407 +0000 UTC m=+1.082868697 container died 4b4133ac81c563e98bd75d1405f3fce62602ad4e46320ed3d1b9d68b41b87700 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_kilby, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, release=553, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True) Oct 5 05:16:04 localhost systemd[1]: var-lib-containers-storage-overlay-7e92110254c16352221d03c7dc75dcd59efc24e6f8344a365e6ce34a4d4ed460-merged.mount: Deactivated successfully. 
Oct 5 05:16:04 localhost podman[129779]: 2025-10-05 09:16:04.802372275 +0000 UTC m=+0.062683185 container remove 4b4133ac81c563e98bd75d1405f3fce62602ad4e46320ed3d1b9d68b41b87700 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_kilby, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, ceph=True, version=7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=553, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.33.12, RELEASE=main) Oct 5 05:16:04 localhost systemd[1]: libpod-conmon-4b4133ac81c563e98bd75d1405f3fce62602ad4e46320ed3d1b9d68b41b87700.scope: Deactivated successfully. 
Oct 5 05:16:10 localhost sshd[129833]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:16:12 localhost podman[129703]: 2025-10-05 09:16:04.727674486 +0000 UTC m=+0.045487431 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 05:16:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28886 DF PROTO=TCP SPT=51436 DPT=9102 SEQ=1117675025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED646CD0000000001030307) Oct 5 05:16:13 localhost python3.9[129999]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28887 DF PROTO=TCP SPT=51436 DPT=9102 SEQ=1117675025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED64ADD0000000001030307) Oct 5 05:16:15 localhost podman[130013]: 2025-10-05 09:16:13.721821431 +0000 UTC m=+0.043191278 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Oct 5 05:16:16 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28888 DF PROTO=TCP SPT=51436 DPT=9102 SEQ=1117675025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED652DD0000000001030307) Oct 5 05:16:16 localhost python3.9[130174]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56483 DF PROTO=TCP SPT=48468 DPT=9100 SEQ=3792915119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6545D0000000001030307) Oct 5 05:16:17 localhost podman[130187]: 2025-10-05 09:16:16.790277729 +0000 UTC m=+0.046133508 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 05:16:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56484 DF PROTO=TCP SPT=48468 DPT=9100 SEQ=3792915119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080ABED6585E0000000001030307) Oct 5 05:16:18 localhost python3.9[130351]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56485 DF PROTO=TCP SPT=48468 DPT=9100 SEQ=3792915119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6605D0000000001030307) Oct 5 05:16:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28889 DF PROTO=TCP SPT=51436 DPT=9102 SEQ=1117675025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6629D0000000001030307) Oct 5 05:16:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38334 DF PROTO=TCP SPT=55762 DPT=9105 SEQ=227973574 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6688F0000000001030307) Oct 5 05:16:22 localhost podman[130364]: 2025-10-05 09:16:19.049011053 +0000 UTC m=+0.043922368 image pull 
quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Oct 5 05:16:23 localhost python3.9[130542]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41225 DF PROTO=TCP SPT=47824 DPT=9882 SEQ=405108405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6725D0000000001030307) Oct 5 05:16:24 localhost podman[130554]: 2025-10-05 09:16:23.389704574 +0000 UTC m=+0.081739800 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Oct 5 05:16:26 localhost systemd[1]: session-40.scope: Deactivated successfully. Oct 5 05:16:26 localhost systemd[1]: session-40.scope: Consumed 1min 28.717s CPU time. Oct 5 05:16:26 localhost systemd-logind[760]: Session 40 logged out. Waiting for processes to exit. Oct 5 05:16:26 localhost systemd-logind[760]: Removed session 40. 
Oct 5 05:16:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41226 DF PROTO=TCP SPT=47824 DPT=9882 SEQ=405108405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6821D0000000001030307) Oct 5 05:16:31 localhost sshd[130663]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:16:31 localhost systemd-logind[760]: New session 41 of user zuul. Oct 5 05:16:31 localhost systemd[1]: Started Session 41 of User zuul. Oct 5 05:16:32 localhost python3.9[130756]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:16:36 localhost sshd[130930]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:16:37 localhost python3.9[130929]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None Oct 5 05:16:38 localhost python3.9[131024]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:16:39 localhost python3.9[131078]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:16:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41917 DF 
PROTO=TCP SPT=41982 DPT=9102 SEQ=208205554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6BBFE0000000001030307) Oct 5 05:16:43 localhost python3.9[131428]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:16:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41918 DF PROTO=TCP SPT=41982 DPT=9102 SEQ=208205554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6C01D0000000001030307) Oct 5 05:16:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41919 DF PROTO=TCP SPT=41982 DPT=9102 SEQ=208205554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6C81D0000000001030307) Oct 5 05:16:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19600 DF PROTO=TCP SPT=44076 DPT=9100 SEQ=847000806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6C98C0000000001030307) Oct 5 05:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19601 DF PROTO=TCP SPT=44076 DPT=9100 
SEQ=847000806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6CD9D0000000001030307) Oct 5 05:16:48 localhost python3.9[131525]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:16:49 localhost python3.9[131618]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:16:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19602 DF PROTO=TCP SPT=44076 DPT=9100 SEQ=847000806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6D59D0000000001030307) Oct 5 05:16:50 localhost python3.9[131710]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None Oct 5 05:16:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41920 DF PROTO=TCP SPT=41982 DPT=9102 SEQ=208205554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6D7DD0000000001030307) Oct 5 05:16:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44848 DF PROTO=TCP SPT=47972 DPT=9105 SEQ=1693169121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6DDBF0000000001030307) Oct 5 05:16:52 localhost kernel: SELinux: Converting 2756 SID table entries... 
Oct 5 05:16:52 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 05:16:52 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 05:16:53 localhost python3.9[132386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:16:54 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=18 res=1 Oct 5 05:16:54 localhost python3.9[132484]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:16:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30295 DF PROTO=TCP SPT=33580 DPT=9882 SEQ=2777257444 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6E79D0000000001030307) Oct 5 05:16:58 localhost python3.9[132578]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:16:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30296 DF PROTO=TCP SPT=33580 DPT=9882 SEQ=2777257444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED6F75D0000000001030307) Oct 5 05:17:00 localhost python3.9[132823]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:17:01 localhost python3.9[132913]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:17:02 localhost python3.9[133007]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False 
update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:17:05 localhost python3.9[133101]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:17:09 localhost python3.9[133271]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 05:17:10 localhost systemd[1]: Reloading. Oct 5 05:17:10 localhost systemd-sysv-generator[133302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:17:10 localhost systemd-rc-local-generator[133298]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:17:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:17:11 localhost python3.9[133403]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:17:13 localhost python3.9[133495]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9622 DF PROTO=TCP SPT=49172 DPT=9102 SEQ=2185203000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7312E0000000001030307) Oct 5 05:17:13 localhost python3.9[133589]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9623 DF PROTO=TCP SPT=49172 DPT=9102 SEQ=2185203000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7351E0000000001030307) Oct 5 05:17:14 localhost python3.9[133681]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True 
option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:15 localhost python3.9[133773]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:15 localhost python3.9[133846]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655834.7533576-563-59188291708002/.source _original_basename=.7b8byf0z follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9624 DF PROTO=TCP SPT=49172 DPT=9102 SEQ=2185203000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED73D1D0000000001030307) Oct 5 05:17:16 localhost python3.9[133938]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28048 DF PROTO=TCP SPT=59798 DPT=9100 SEQ=2982377685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED73EBC0000000001030307) Oct 5 05:17:17 localhost python3.9[134030]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={} Oct 5 05:17:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28049 DF PROTO=TCP SPT=59798 DPT=9100 SEQ=2982377685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED742DD0000000001030307) Oct 5 05:17:18 localhost python3.9[134122]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:19 localhost python3.9[134214]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:19 localhost python3.9[134287]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655838.6023726-689-268068071349979/.source.yaml _original_basename=.ahc9ix74 follow=False checksum=0cadac3cfc033a4e07cfac59b43f6459e787700a force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:19 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28050 DF PROTO=TCP SPT=59798 DPT=9100 SEQ=2982377685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED74ADD0000000001030307) Oct 5 05:17:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9625 DF PROTO=TCP SPT=49172 DPT=9102 SEQ=2185203000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED74CDE0000000001030307) Oct 5 05:17:20 localhost python3.9[134379]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml Oct 5 05:17:21 localhost ansible-async_wrapper.py[134484]: Invoked with j794212280854 300 /home/zuul/.ansible/tmp/ansible-tmp-1759655840.9581475-761-163223727666823/AnsiballZ_edpm_os_net_config.py _ Oct 5 05:17:21 localhost ansible-async_wrapper.py[134487]: Starting module and watcher Oct 5 05:17:21 localhost ansible-async_wrapper.py[134487]: Start watching 134488 (300) Oct 5 05:17:21 localhost ansible-async_wrapper.py[134488]: Start module (134488) Oct 5 05:17:21 localhost ansible-async_wrapper.py[134484]: Return async_wrapper task started. 
Oct 5 05:17:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18534 DF PROTO=TCP SPT=58268 DPT=9105 SEQ=1312522908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED752EF0000000001030307) Oct 5 05:17:22 localhost python3.9[134489]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False Oct 5 05:17:22 localhost ansible-async_wrapper.py[134488]: Module complete (134488) Oct 5 05:17:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57742 DF PROTO=TCP SPT=43006 DPT=9882 SEQ=983672114 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED75C9D0000000001030307) Oct 5 05:17:25 localhost python3.9[134581]: ansible-ansible.legacy.async_status Invoked with jid=j794212280854.134484 mode=status _async_dir=/root/.ansible_async Oct 5 05:17:26 localhost python3.9[134640]: ansible-ansible.legacy.async_status Invoked with jid=j794212280854.134484 mode=cleanup _async_dir=/root/.ansible_async Oct 5 05:17:26 localhost python3.9[134732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:26 localhost ansible-async_wrapper.py[134487]: Done in kid B. 
Oct 5 05:17:27 localhost python3.9[134805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655846.2462988-827-175752387651574/.source.returncode _original_basename=.j2m_x4pl follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:27 localhost python3.9[134897]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:28 localhost python3.9[134970]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655847.5046792-875-74867363226398/.source.cfg _original_basename=.wunoc51f follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57743 DF PROTO=TCP SPT=43006 DPT=9882 SEQ=983672114 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED76C5E0000000001030307) Oct 5 05:17:29 localhost python3.9[135062]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:17:30 localhost systemd[1]: Reloading 
Network Manager... Oct 5 05:17:30 localhost NetworkManager[5981]: [1759655850.3470] audit: op="reload" arg="0" pid=135066 uid=0 result="success" Oct 5 05:17:30 localhost NetworkManager[5981]: [1759655850.3479] config: signal: SIGHUP (no changes from disk) Oct 5 05:17:30 localhost systemd[1]: Reloaded Network Manager. Oct 5 05:17:30 localhost systemd[1]: session-41.scope: Deactivated successfully. Oct 5 05:17:30 localhost systemd[1]: session-41.scope: Consumed 35.234s CPU time. Oct 5 05:17:30 localhost systemd-logind[760]: Session 41 logged out. Waiting for processes to exit. Oct 5 05:17:30 localhost systemd-logind[760]: Removed session 41. Oct 5 05:17:36 localhost sshd[135081]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:17:36 localhost systemd-logind[760]: New session 42 of user zuul. Oct 5 05:17:36 localhost systemd[1]: Started Session 42 of User zuul. Oct 5 05:17:37 localhost python3.9[135174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:17:38 localhost python3.9[135268]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:17:40 localhost python3.9[135421]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:17:41 localhost systemd[1]: session-42.scope: Deactivated successfully. Oct 5 05:17:41 localhost systemd[1]: session-42.scope: Consumed 2.279s CPU time. Oct 5 05:17:41 localhost systemd-logind[760]: Session 42 logged out. Waiting for processes to exit. Oct 5 05:17:41 localhost systemd-logind[760]: Removed session 42. 
Oct 5 05:17:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14020 DF PROTO=TCP SPT=53258 DPT=9102 SEQ=4229627068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7A65E0000000001030307) Oct 5 05:17:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14021 DF PROTO=TCP SPT=53258 DPT=9102 SEQ=4229627068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7AA5D0000000001030307) Oct 5 05:17:46 localhost sshd[135437]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:17:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14022 DF PROTO=TCP SPT=53258 DPT=9102 SEQ=4229627068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7B25D0000000001030307) Oct 5 05:17:46 localhost systemd-logind[760]: New session 43 of user zuul. Oct 5 05:17:46 localhost systemd[1]: Started Session 43 of User zuul. 
Oct 5 05:17:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18476 DF PROTO=TCP SPT=40516 DPT=9100 SEQ=3749150059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7B3ED0000000001030307) Oct 5 05:17:47 localhost python3.9[135530]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:17:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18477 DF PROTO=TCP SPT=40516 DPT=9100 SEQ=3749150059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7B7DE0000000001030307) Oct 5 05:17:48 localhost sshd[135584]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:17:48 localhost python3.9[135626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:17:49 localhost python3.9[135722]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:17:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18478 DF PROTO=TCP SPT=40516 DPT=9100 SEQ=3749150059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7BFDD0000000001030307) Oct 5 05:17:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14023 DF PROTO=TCP SPT=53258 DPT=9102 SEQ=4229627068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7C21E0000000001030307) Oct 5 05:17:50 
localhost python3.9[135776]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:17:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24467 DF PROTO=TCP SPT=57630 DPT=9105 SEQ=3926849568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7C81F0000000001030307) Oct 5 05:17:54 localhost python3.9[135870]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:17:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63248 DF PROTO=TCP SPT=52138 DPT=9882 SEQ=1255069264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7D1DD0000000001030307) Oct 5 05:17:55 localhost python3.9[136025]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:56 localhost python3.9[136117]: 
ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:17:57 localhost python3.9[136221]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:57 localhost python3.9[136269]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:58 localhost python3.9[136361]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63249 DF PROTO=TCP SPT=52138 DPT=9882 SEQ=1255069264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED7E19E0000000001030307) Oct 5 05:17:58 localhost python3.9[136409]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:17:59 localhost python3.9[136501]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:18:00 localhost python3.9[136593]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:18:01 localhost python3.9[136685]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:18:01 localhost python3.9[136777]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False 
modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:18:02 localhost python3.9[136869]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:18:06 localhost python3.9[136963]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:18:07 localhost python3.9[137057]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:18:08 localhost python3.9[137149]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:18:09 localhost python3.9[137241]: ansible-service_facts Invoked Oct 5 05:18:09 localhost network[137258]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:18:09 localhost network[137259]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:18:09 localhost network[137260]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:18:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 5 05:18:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52533 DF PROTO=TCP SPT=34076 DPT=9102 SEQ=1908483626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED81B8D0000000001030307) Oct 5 05:18:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52534 DF PROTO=TCP SPT=34076 DPT=9102 SEQ=1908483626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED81F9D0000000001030307) Oct 5 05:18:14 localhost python3.9[137710]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:18:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52535 DF PROTO=TCP SPT=34076 DPT=9102 SEQ=1908483626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8279D0000000001030307) Oct 5 05:18:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17589 DF PROTO=TCP SPT=44402 DPT=9100 SEQ=2168339849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080ABED8291D0000000001030307) Oct 5 05:18:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17590 DF PROTO=TCP SPT=44402 DPT=9100 SEQ=2168339849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED82D1E0000000001030307) Oct 5 05:18:19 localhost python3.9[137804]: ansible-package_facts Invoked with manager=['auto'] strategy=first Oct 5 05:18:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17591 DF PROTO=TCP SPT=44402 DPT=9100 SEQ=2168339849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8351D0000000001030307) Oct 5 05:18:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52536 DF PROTO=TCP SPT=34076 DPT=9102 SEQ=1908483626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8375D0000000001030307) Oct 5 05:18:20 localhost python3.9[137896]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:21 localhost python3.9[137971]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655900.3131769-620-229260886151242/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48223 DF PROTO=TCP SPT=55390 DPT=9105 SEQ=3393295429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED83D4F0000000001030307) Oct 5 05:18:22 localhost python3.9[138065]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:22 localhost python3.9[138140]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655901.813848-665-171761435269273/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3900 DF PROTO=TCP SPT=55744 DPT=9882 SEQ=1114481336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8471D0000000001030307) Oct 5 05:18:24 localhost python3.9[138234]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:26 localhost python3.9[138328]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:18:27 localhost python3.9[138382]: 
ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:18:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3901 DF PROTO=TCP SPT=55744 DPT=9882 SEQ=1114481336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED856DD0000000001030307) Oct 5 05:18:29 localhost python3.9[138476]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:18:30 localhost python3.9[138530]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:18:30 localhost chronyd[25884]: chronyd exiting Oct 5 05:18:30 localhost systemd[1]: Stopping NTP client/server... Oct 5 05:18:30 localhost systemd[1]: chronyd.service: Deactivated successfully. Oct 5 05:18:30 localhost systemd[1]: Stopped NTP client/server. Oct 5 05:18:30 localhost systemd[1]: Starting NTP client/server... Oct 5 05:18:30 localhost chronyd[138538]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Oct 5 05:18:30 localhost chronyd[138538]: Frequency -26.645 +/- 0.182 ppm read from /var/lib/chrony/drift Oct 5 05:18:30 localhost chronyd[138538]: Loaded seccomp filter (level 2) Oct 5 05:18:30 localhost systemd[1]: Started NTP client/server. Oct 5 05:18:32 localhost systemd[1]: session-43.scope: Deactivated successfully. Oct 5 05:18:32 localhost systemd[1]: session-43.scope: Consumed 28.104s CPU time. Oct 5 05:18:32 localhost systemd-logind[760]: Session 43 logged out. Waiting for processes to exit. Oct 5 05:18:32 localhost systemd-logind[760]: Removed session 43. 
Oct 5 05:18:38 localhost sshd[138554]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:18:38 localhost systemd-logind[760]: New session 44 of user zuul. Oct 5 05:18:38 localhost systemd[1]: Started Session 44 of User zuul. Oct 5 05:18:39 localhost python3.9[138647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:18:41 localhost python3.9[138743]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:42 localhost python3.9[138848]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:42 localhost python3.9[138896]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.onxpbcfw recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4230 DF PROTO=TCP SPT=44942 DPT=9102 SEQ=3558276813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED890BE0000000001030307) Oct 5 05:18:43 localhost python3.9[138988]: ansible-ansible.legacy.stat Invoked 
with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4231 DF PROTO=TCP SPT=44942 DPT=9102 SEQ=3558276813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED894DD0000000001030307) Oct 5 05:18:45 localhost python3.9[139063]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655922.9886868-143-828708928199/.source _original_basename=.c9uthw74 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:45 localhost python3.9[139155]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:18:46 localhost python3.9[139247]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:46 localhost auditd[726]: Audit daemon rotating log files Oct 5 05:18:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4232 DF PROTO=TCP SPT=44942 
DPT=9102 SEQ=3558276813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED89CDD0000000001030307) Oct 5 05:18:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11044 DF PROTO=TCP SPT=41120 DPT=9100 SEQ=4108978339 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED89E4D0000000001030307) Oct 5 05:18:47 localhost python3.9[139320]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655925.979738-215-111936379247993/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:18:47 localhost python3.9[139412]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11045 DF PROTO=TCP SPT=41120 DPT=9100 SEQ=4108978339 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8A25D0000000001030307) Oct 5 05:18:48 localhost python3.9[139485]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655927.1654441-215-239646914916820/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 
backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:18:48 localhost python3.9[139577]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:49 localhost python3.9[139669]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11046 DF PROTO=TCP SPT=41120 DPT=9100 SEQ=4108978339 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8AA5D0000000001030307) Oct 5 05:18:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4233 DF PROTO=TCP SPT=44942 DPT=9102 SEQ=3558276813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8AC9D0000000001030307) Oct 5 05:18:51 localhost python3.9[139742]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655929.1590269-326-2414900148610/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:51 localhost python3.9[139834]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12984 DF PROTO=TCP SPT=54656 DPT=9105 SEQ=3830991683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8B27F0000000001030307) Oct 5 05:18:52 localhost python3.9[139907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655931.1836824-371-43823095627355/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:53 localhost python3.9[139999]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:18:53 localhost systemd[1]: Reloading. Oct 5 05:18:53 localhost systemd-rc-local-generator[140021]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:18:53 localhost systemd-sysv-generator[140025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:18:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:18:54 localhost systemd[1]: Reloading. Oct 5 05:18:54 localhost systemd-rc-local-generator[140065]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:18:54 localhost systemd-sysv-generator[140069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:18:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:18:54 localhost systemd[1]: Starting EDPM Container Shutdown... Oct 5 05:18:54 localhost systemd[1]: Finished EDPM Container Shutdown. 
Oct 5 05:18:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38249 DF PROTO=TCP SPT=48426 DPT=9882 SEQ=3055760753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8BC5D0000000001030307) Oct 5 05:18:55 localhost python3.9[140168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:55 localhost python3.9[140241]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655934.521657-440-8242919432310/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:18:56 localhost python3.9[140333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:18:56 localhost python3.9[140406]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655935.7837605-485-87398932403825/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 
05:18:57 localhost python3.9[140498]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:18:57 localhost systemd[1]: Reloading. Oct 5 05:18:57 localhost systemd-rc-local-generator[140522]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:18:57 localhost systemd-sysv-generator[140529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:18:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:18:57 localhost systemd[1]: Starting Create netns directory... Oct 5 05:18:58 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:18:58 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:18:58 localhost systemd[1]: Finished Create netns directory. Oct 5 05:18:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38250 DF PROTO=TCP SPT=48426 DPT=9882 SEQ=3055760753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED8CC1D0000000001030307) Oct 5 05:18:58 localhost python3.9[140630]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:18:58 localhost network[140647]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:18:58 localhost network[140648]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:18:58 localhost network[140649]: It is advised to switch to 'NetworkManager' instead for network management. 
Oct 5 05:19:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:19:03 localhost python3.9[140850]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:04 localhost python3.9[140925]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655943.1483374-608-122450908385676/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:05 localhost python3.9[141016]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:19:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18878 DF PROTO=TCP SPT=40904 DPT=9102 SEQ=138923546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED905ED0000000001030307) Oct 5 05:19:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18879 DF PROTO=TCP SPT=40904 DPT=9102 SEQ=138923546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED909DD0000000001030307) Oct 5 05:19:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18880 DF PROTO=TCP SPT=40904 DPT=9102 SEQ=138923546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED911DE0000000001030307) Oct 5 05:19:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27845 DF PROTO=TCP SPT=50598 DPT=9100 SEQ=403070332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9137D0000000001030307) Oct 5 05:19:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27846 DF PROTO=TCP SPT=50598 DPT=9100 SEQ=403070332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9179D0000000001030307) Oct 5 05:19:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27847 DF PROTO=TCP SPT=50598 DPT=9100 SEQ=403070332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED91F9D0000000001030307) Oct 5 05:19:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18881 DF PROTO=TCP SPT=40904 DPT=9102 SEQ=138923546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9219D0000000001030307) Oct 5 05:19:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56273 DF PROTO=TCP SPT=44116 DPT=9105 SEQ=2071990636 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED927AF0000000001030307) Oct 5 05:19:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50325 DF PROTO=TCP SPT=53520 DPT=9882 SEQ=1434329414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9315D0000000001030307) Oct 5 05:19:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50326 DF PROTO=TCP SPT=53520 DPT=9882 SEQ=1434329414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9411D0000000001030307) Oct 5 05:19:30 localhost systemd[1]: session-44.scope: Deactivated successfully. Oct 5 05:19:30 localhost systemd[1]: session-44.scope: Consumed 14.343s CPU time. Oct 5 05:19:30 localhost systemd-logind[760]: Session 44 logged out. Waiting for processes to exit. Oct 5 05:19:30 localhost systemd-logind[760]: Removed session 44. Oct 5 05:19:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21952 DF PROTO=TCP SPT=40654 DPT=9102 SEQ=3309943412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED97B1D0000000001030307) Oct 5 05:19:43 localhost sshd[141122]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:19:43 localhost systemd-logind[760]: New session 45 of user zuul. Oct 5 05:19:43 localhost systemd[1]: Started Session 45 of User zuul. 
Oct 5 05:19:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21953 DF PROTO=TCP SPT=40654 DPT=9102 SEQ=3309943412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED97F1D0000000001030307) Oct 5 05:19:44 localhost python3.9[141215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:19:45 localhost python3.9[141311]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21954 DF PROTO=TCP SPT=40654 DPT=9102 SEQ=3309943412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9871D0000000001030307) Oct 5 05:19:46 localhost python3.9[141416]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48567 DF PROTO=TCP SPT=42722 DPT=9100 SEQ=4091610417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED988AD0000000001030307) Oct 5 05:19:47 localhost python3.9[141464]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul 
dest=/root/.config/containers/auth.json _original_basename=.tz3vg2qp recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48568 DF PROTO=TCP SPT=42722 DPT=9100 SEQ=4091610417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED98C9D0000000001030307) Oct 5 05:19:48 localhost python3.9[141556]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:48 localhost python3.9[141604]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.2yq2ltxj recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:49 localhost python3.9[141696]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:19:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48569 DF PROTO=TCP SPT=42722 DPT=9100 SEQ=4091610417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9949E0000000001030307) Oct 5 05:19:49 localhost python3.9[141788]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:50 localhost python3.9[141836]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:19:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21955 DF PROTO=TCP SPT=40654 DPT=9102 SEQ=3309943412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED996DD0000000001030307) Oct 5 05:19:51 localhost python3.9[141928]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:51 localhost python3.9[141976]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:19:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33776 DF PROTO=TCP SPT=36218 DPT=9105 SEQ=812073739 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED99CE10000000001030307) Oct 5 05:19:52 localhost python3.9[142068]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:52 localhost python3.9[142160]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:53 localhost python3.9[142208]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:53 localhost python3.9[142300]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:54 localhost python3.9[142348]: 
ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51910 DF PROTO=TCP SPT=53144 DPT=9882 SEQ=2837583907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9A69D0000000001030307) Oct 5 05:19:55 localhost python3.9[142440]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:19:55 localhost systemd[1]: Reloading. Oct 5 05:19:55 localhost systemd-sysv-generator[142470]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:19:55 localhost systemd-rc-local-generator[142463]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:19:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:19:56 localhost python3.9[142570]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:57 localhost python3.9[142618]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:57 localhost python3.9[142710]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:58 localhost python3.9[142758]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51911 DF PROTO=TCP SPT=53144 DPT=9882 SEQ=2837583907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9B65D0000000001030307) Oct 5 05:19:59 localhost python3.9[142850]: ansible-ansible.builtin.systemd Invoked 
with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:19:59 localhost systemd[1]: Reloading. Oct 5 05:19:59 localhost systemd-sysv-generator[142879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:19:59 localhost systemd-rc-local-generator[142873]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:19:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:19:59 localhost systemd[1]: Starting Create netns directory... Oct 5 05:19:59 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:19:59 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:19:59 localhost systemd[1]: Finished Create netns directory. Oct 5 05:20:00 localhost python3.9[142981]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:20:00 localhost network[142998]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:20:00 localhost network[142999]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:20:00 localhost network[143000]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:20:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:20:03 localhost python3.9[143201]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:04 localhost python3.9[143249]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:05 localhost python3.9[143341]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:05 localhost python3.9[143433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:06 localhost python3.9[143506]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656005.1887994-608-158455368017438/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:07 localhost 
python3.9[143598]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Oct 5 05:20:07 localhost systemd[1]: Starting Time & Date Service... Oct 5 05:20:07 localhost systemd[1]: Started Time & Date Service. Oct 5 05:20:08 localhost python3.9[143694]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:08 localhost python3.9[143786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:09 localhost python3.9[143859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656008.420174-713-222887817140720/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:10 localhost python3.9[143951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:11 localhost python3.9[144024]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656009.7066095-758-164103348533525/.source.yaml _original_basename=.k6qunxaj follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:12 localhost python3.9[144116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:12 localhost python3.9[144191]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656011.60564-803-161135880654008/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33063 DF PROTO=TCP SPT=42274 DPT=9102 SEQ=2246915834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9F04F0000000001030307) Oct 5 05:20:13 localhost python3.9[144313]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:20:14 localhost systemd[1]: tmp-crun.FVveUL.mount: Deactivated successfully. 
Oct 5 05:20:14 localhost podman[144458]: 2025-10-05 09:20:14.22948402 +0000 UTC m=+0.088783297 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:20:14 localhost podman[144458]: 2025-10-05 09:20:14.32906094 +0000 UTC m=+0.188360237 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, io.buildah.version=1.33.12, vcs-type=git, release=553, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
RELEASE=main, ceph=True, architecture=x86_64, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:20:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33064 DF PROTO=TCP SPT=42274 DPT=9102 SEQ=2246915834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9F45D0000000001030307) Oct 5 05:20:14 localhost python3.9[144496]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:20:15 localhost python3[144667]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 5 05:20:15 localhost python3.9[144790]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:16 localhost python3.9[144878]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656015.4482176-920-250012216751896/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36434 DF PROTO=TCP SPT=45302 DPT=9100 SEQ=3648704692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABED9FDDD0000000001030307) Oct 5 05:20:17 localhost python3.9[144970]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:17 localhost python3.9[145043]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656016.7342124-965-130018666121864/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:18 localhost python3.9[145135]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:19 localhost python3.9[145208]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656017.994414-1010-1536043164601/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:19 localhost python3.9[145300]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:20 localhost python3.9[145373]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656019.1817439-1055-177820315171750/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13776 DF PROTO=TCP SPT=51682 DPT=9882 SEQ=3881479492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDA0FD30000000001030307) Oct 5 05:20:21 localhost python3.9[145465]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24978 DF PROTO=TCP SPT=54956 DPT=9105 SEQ=1772273560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDA12130000000001030307) Oct 5 05:20:22 localhost python3.9[145538]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656020.9957485-1100-24255523832220/.source.nft follow=False 
_original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:23 localhost python3.9[145630]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26197 DF PROTO=TCP SPT=46594 DPT=9101 SEQ=732407383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDA164D0000000001030307) Oct 5 05:20:23 localhost python3.9[145722]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:20:24 localhost python3.9[145817]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False 
prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:25 localhost python3.9[145910]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:25 localhost python3.9[146002]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:26 localhost python3.9[146094]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Oct 5 05:20:27 localhost python3.9[146187]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Oct 5 05:20:27 localhost systemd-logind[760]: Session 45 logged out. Waiting for processes to exit. Oct 5 05:20:27 localhost systemd[1]: session-45.scope: Deactivated successfully. Oct 5 05:20:27 localhost systemd[1]: session-45.scope: Consumed 28.206s CPU time. Oct 5 05:20:27 localhost systemd-logind[760]: Removed session 45. 
Oct 5 05:20:35 localhost sshd[146203]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:20:35 localhost systemd-logind[760]: New session 46 of user zuul. Oct 5 05:20:35 localhost systemd[1]: Started Session 46 of User zuul. Oct 5 05:20:36 localhost python3.9[146298]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None Oct 5 05:20:37 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Oct 5 05:20:37 localhost python3.9[146392]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:20:39 localhost python3.9[146486]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts Oct 5 05:20:39 localhost chronyd[138538]: Selected source 54.39.23.64 (pool.ntp.org) Oct 5 05:20:40 localhost python3.9[146578]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.tf0v23kq follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:41 localhost python3.9[146653]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.tf0v23kq mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656040.1004333-189-138831993076243/.source.tf0v23kq _original_basename=.suoh4i7z follow=False checksum=a5b7abc70e8cdf8ce48ea3fad60c0d7fc823809c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61867 DF PROTO=TCP SPT=50186 DPT=9102 SEQ=4129694733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDA657E0000000001030307) Oct 5 05:20:45 localhost python3.9[146745]: 
ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:20:46 localhost python3.9[146837]: ansible-ansible.builtin.blockinfile Invoked with block=np0005471148.localdomain,192.168.122.105,np0005471148* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCav0eZ81SP1lgxNKp8kzS2MGddVZXD3CnfZarlQErB75DRL4T/NvcVXnfxKn4UPX+h1zwIlKhrD0kHzKTVqifYPUqAmLb8rYREMTmXhQxto2b7VGPMQJtDAprHqyUEFlSdV8NbN3SVctntX/mSKO9bD06JFfa3F62ItPVHy6SnAKMzgNdSszOdKFvbEzC2oxcehr1uB2BAOIiTb1KxyTjXhvXZSYUsBxiGWPOP83oZQxCJlh/VjIUu6P2F6+mv1415n4ujbEujO8/iVbBF1uy28bTobQfABbfPNDNUCd9Gr+xDlT4JuuYTcjqG+gr3yvctzwj/+lxYcJbC0ZYtRhJ0pu8gjm44UFVFCpPxwPpvkKV5n+jU3uaSX98EZpaTlK51qqfwX29LxmMKs3pezfixQ67KCoq1jcDNXUiZpX9svKFD2Drlx+6s9pBkQGZcsmVNiCKQBJmrpFCgYhAPOEIjAGPkic0qp+pAaJtQpB/gYfF/cNCJmCm80s5s/jRuSOs=#012np0005471148.localdomain,192.168.122.105,np0005471148* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAp7Wif6DpMQKTwU3PubEUDmFwUOeZnS+fubLkMUqCdL#012np0005471148.localdomain,192.168.122.105,np0005471148* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+Jh3lVRxMbXFkgqshiJoCO9Ej1k6b9l13ZcaXQzdlR/Wufer1byxTOnOxRYkvLgFnjgmViKWAnlhwFgjslN0E=#012np0005471152.localdomain,192.168.122.108,np0005471152* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQL9bjzo5YAISp2Bxwtb4g1hALXPqelm3WBGwGfh3/tRyDvnxqpgAH4BkgnyM92vRVDUZgylBjfJ54aevQzR0sxDWI5un2tTEepezxrrMvJNDvOss/fCLi88oah/o3qw++j3XWh7zZNBR2ZlXoM/pIxbee1SynEGOX2B0csXrd1qrshg6L4eHx3xP0RwAulzm5seEcMLqx8KH2dq77wY0VqQkpaFyFb7FqX77rxq/UKPpgE0srhO8SRvE9De5pNe/qOciIyF6dgzu5EyyHu7KYjTILbMKxDa32WE/P2Rf7vIscc9uCS7JGMjSz6NeeFnpRpsv8N/pMUGyuUGsD1ZchAk2FVF+E5cZtF04URyBXHR3aMjxItV46eMTahkYu0ieB5XIe1ht+1mpTNW5HuK+c5IGVa1+5Y3udf7NKVNLxbJKJpiyb1+mVhhrwPzJFaIuMT3y2IHiF3xGDIof8BMBzvhUW/T0WYISPRdb3hpP5yODYfEz7Mmnpe6mZj+mFVVc=#012np0005471152.localdomain,192.168.122.108,np0005471152* ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIP3x5SckWWWGd79jap3Mvs5wH/QoloMzzJMibApRFTOH#012np0005471152.localdomain,192.168.122.108,np0005471152* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2IZNaNg1HrZ8uBp5hH2F2fftZpwxpN/FAZW1FDmDJnG3zQL7JXSnOySV+EzgCTEq8YFKz+6pYQVjbNBVcMyHY=#012np0005471147.localdomain,192.168.122.104,np0005471147* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz7dSoZhAVsu7Q6pQ5T0a3vdxjM8VsWq083YCwmW5ZBuWxtpO+ywiBUZXF2GXQh83uhFPjTL6AVFeIX5lNLPi70M1qL6Twe/O2mk2gSzlx225JQnN98IGNIaiWFoDWJeh+QC5ahKjsZLMqt7JQaJMEu8Y+pNNhDzn+mrA5SQL/4KeoVuUMVnHW606U26xi/2P8WkxBdjPuLtDQdFdmprrS1/lNbxCAMj0MhrqsxbpX9uLe04KqrNXmsaTlvu+XKlf2y7mxaihY81Qbyf86Guw2DS8EIhDZjC2olPxoqJJn5ZAGtvtc/FzkH/pbbMy1CbD6OnTFGsUHbZKS9eBF7PtpLp3YiUp/FyRfiyxmtelUycYx7bqdixnmEGj4O2Ju2ehdpxO1RyBRyrfUelVA8bfBft6yd41RwKwujj5OtnOXzqb7I8O83ZgbDm6oUjTG+59hElsoR3PI5ow3C3NTrDQxwesLfuTjCrjHCWnvKIQb51xqtNRDT8PTStx27/FxOJ0=#012np0005471147.localdomain,192.168.122.104,np0005471147* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK2i2wPoxrCiKAfRIrzXmTAp8OTrj2YwZHMGqK46Nz23#012np0005471147.localdomain,192.168.122.104,np0005471147* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEylXfK9QkjmsDlz9cP3sZHSxfYmmFZ1i6DugCmJUagRpornJXqftjM+iDp79cZs676yn/qZCEtj0wsqsiaQjLA=#012np0005471151.localdomain,192.168.122.107,np0005471151* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeDNxXs+ZUIP9/a2zVFllXGXsP2/RtUXLMLDP4YL71gvVrRf+MpnYrvCNPSMtaio8hFnrpiDFXxbT/vT8cGaq0VtYxjMm6ggMMEpJTsx2xG5zkDW3nbKnfBWdlrf2h3+WUBHOB9mofrB5CT0cuNDshy8Zq3cPyqMZVPdJXPIH+fsWD+b65aHwAk93ThJehxt/nPEDADcRKHLYFTlAyvnZ5aEvqj714SQIjwLcSkgaTfu3JmjF9FllzZz3DKBld7fRbggrz2rkww5yxrvj9W/KsoSugYq1N+fEEWdUonP/PYnRfJ9Qe+OMV5TmEEYuUOqPqaVs8vMZI4zYb3l5asdknHsN0N3URQbZANs9Fettfh3uoOPlyegvPjIMukQ8KZAy+KQWSAzho7RnR5ULuWVNi7Rj9mFC01wy0778Zqb7BlWc+Yn3kNXEkR9u1vQjBq7B+Ie922b6pYARzXmaE2yjzI7QdYo1IB/o9UIP/zEfugki+28qB0215MGXrk3EqTk8=#012np0005471151.localdomain,192.168.122.107,np0005471151* ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIOPYp4E4CPb8OeaXcuCXzvWlnLbzMphE36OLWOqzbsk9#012np0005471151.localdomain,192.168.122.107,np0005471151* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGVed2EDqr9esw80ElZbLpRPK5ioAVBRkpsLKO1S/aN8MVh1BSM2slQbIv+QbUY3Qu3prAQuxkBFoKvxbciSRgQ=#012np0005471146.localdomain,192.168.122.103,np0005471146* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDB7OvQtGFS2ddbuT67PLzOZMMKExXKgLGlJbGmtwnZie42R//csfGTuDcY5sTL5gAKr5LgWtvuSJPxC5H8l1UXw+Jr1ot425wmg47AIcheuJNQqzQ7tPAGH3PICnVC6aPHAOVRVF+gH7UOtvdgmSE7iMATMRPcUy2tqR8NCuKKvzDeS/2RQXJpgWok3C9RwXiVS5oUv9jUyevFtgntUOYojmdQgQKC7AwBkYfT7TF3CJZYryU/VVFtwd7a/UiSCw5QLoTN8NxCyROZfFtmylvUybp8RdUroQiriJw1zcQyVLsXbwq0clpb5hc+/3tQLZv3a6JrVpp5DZq+MW98UkErXy11sX4Mk9e2seewM0xMkdGzMReNlZqtUWLIISbhxkBby9gn3WRKG32HdCCSD66ZhNAfOCfpaO3dNiCRUyzYoh4WRF7pu7nwBQ/eTQp8SGptdGGHUf0XF9tqRWjj2nrVrHHOnbj/9clk9VdTU6dbcxFoz3X5SWbovR40rDPz6e0=#012np0005471146.localdomain,192.168.122.103,np0005471146* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICmMQkOJE522ttIEI6FiMBU6NgTQz2to1syfYlA1Memo#012np0005471146.localdomain,192.168.122.103,np0005471146* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEeJqmNJdbqm27rXqmy1Bcaw9svoUWZ+mqG5yOvqgawLTVR507UPdDgYoX7XGWbb81SzubbZqbU2YQpLzpWeEs4=#012np0005471150.localdomain,192.168.122.106,np0005471150* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCT5ftkzxR2Qyrkv4Bog+udHavLt9s9Di0AWsGW2RuyQQiM22RbERlEwcEpl46d2UZEA/h4vz9TbE4fxIRY43XsuoO7kScaRsaDEk80scoEanpXJXpL99y+HtDr7IiFnp920RFZWAvClhPuG5f4GTZcAH8JwlQdHLoU08owfBRpfZmDNZcoyX0tprcWQCD7KMlzpxwZFqhjkJVPrnq3lxWA9cG87b9CDA6sHuH8h4RYjBBtCOkxgTVQgBjGVWWjO64RQXgkKPObBX3sBjTYorcuu5af6cl8pwRuWCIDiskwHVqEvsdx7nXa+8le2b250IQoHti8LislYbkhX/LUO0TmKGbvUuzaK3gsuRGLxf+qG4UdCa7CYecLosB0sg0pv7c95e80sFtLwEFyKvUkMfEdbFIxMr03gd1i6lSeafCtY9Xk0sjkbJpMGaj2hsNlv1S6X8taFEHFuQyDEZ3ZkQXwxYkb0pqUef9Fn6d2VvlP4u7GHH+iQZtgv7NZrxvZOos=#012np0005471150.localdomain,192.168.122.106,np0005471150* ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIFgKPJEV6wknnlU6vzKKYTIianKfcvSA46+IMP/yOIqt#012np0005471150.localdomain,192.168.122.106,np0005471150* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIaPYSDU/QOQ7ZadGCJmFA1TBpNbjPtGfciDHN2J4omWnXscBiFsDT0ajtGp7PFBlY4x2ml2I4zPhENaESWoYNQ=#012 create=True mode=0644 path=/tmp/ansible.tf0v23kq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10855 DF PROTO=TCP SPT=44486 DPT=9100 SEQ=4122573595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDA730D0000000001030307) Oct 5 05:20:48 localhost python3.9[146929]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tf0v23kq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:20:49 localhost python3.9[147023]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tf0v23kq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:50 localhost systemd[1]: session-46.scope: Deactivated successfully. Oct 5 05:20:50 localhost systemd[1]: session-46.scope: Consumed 4.404s CPU time. Oct 5 05:20:50 localhost systemd-logind[760]: Session 46 logged out. 
Waiting for processes to exit. Oct 5 05:20:50 localhost systemd-logind[760]: Removed session 46. Oct 5 05:20:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54810 DF PROTO=TCP SPT=52650 DPT=9882 SEQ=1870407026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDA85030000000001030307) Oct 5 05:20:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2043 DF PROTO=TCP SPT=36546 DPT=9105 SEQ=4219333226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDA873F0000000001030307) Oct 5 05:20:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64374 DF PROTO=TCP SPT=48734 DPT=9101 SEQ=1684763320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDA8B7E0000000001030307) Oct 5 05:20:56 localhost sshd[147038]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:20:56 localhost systemd-logind[760]: New session 47 of user zuul. Oct 5 05:20:56 localhost systemd[1]: Started Session 47 of User zuul. 
Oct 5 05:20:57 localhost python3.9[147131]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:20:58 localhost python3.9[147227]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 05:20:59 localhost python3.9[147321]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:21:01 localhost python3.9[147414]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:21:02 localhost python3.9[147507]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:21:02 localhost python3.9[147601]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:21:03 localhost python3.9[147696]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None 
Oct 5 05:21:03 localhost systemd[1]: session-47.scope: Deactivated successfully.
Oct 5 05:21:03 localhost systemd[1]: session-47.scope: Consumed 4.029s CPU time.
Oct 5 05:21:03 localhost systemd-logind[760]: Session 47 logged out. Waiting for processes to exit.
Oct 5 05:21:04 localhost systemd-logind[760]: Removed session 47.
Oct 5 05:21:09 localhost sshd[147711]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:21:09 localhost systemd-logind[760]: New session 48 of user zuul.
Oct 5 05:21:09 localhost systemd[1]: Started Session 48 of User zuul.
Oct 5 05:21:10 localhost python3.9[147804]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:21:12 localhost python3.9[147900]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:21:13 localhost python3.9[147954]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 5 05:21:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8338 DF PROTO=TCP SPT=46846 DPT=9102 SEQ=938583326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDADAAD0000000001030307)
Oct 5 05:21:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8339 DF PROTO=TCP SPT=46846 DPT=9102 SEQ=938583326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDADE9D0000000001030307)
Oct 5 05:21:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8340 DF PROTO=TCP SPT=46846 DPT=9102 SEQ=938583326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDAE69D0000000001030307)
Oct 5 05:21:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50578 DF PROTO=TCP SPT=58718 DPT=9100 SEQ=2874495457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDAE83C0000000001030307)
Oct 5 05:21:17 localhost python3.9[148107]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:21:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50579 DF PROTO=TCP SPT=58718 DPT=9100 SEQ=2874495457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDAEC5D0000000001030307)
Oct 5 05:21:19 localhost python3.9[148215]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:19 localhost python3.9[148307]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50580 DF PROTO=TCP SPT=58718 DPT=9100 SEQ=2874495457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDAF45E0000000001030307)
Oct 5 05:21:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8341 DF PROTO=TCP SPT=46846 DPT=9102 SEQ=938583326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDAF65D0000000001030307)
Oct 5 05:21:20 localhost python3.9[148399]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9590 DF PROTO=TCP SPT=39394 DPT=9882 SEQ=4271188093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDAFA330000000001030307)
Oct 5 05:21:21 localhost python3.9[148489]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 5 05:21:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31458 DF PROTO=TCP SPT=47652 DPT=9105 SEQ=31385274 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDAFC6F0000000001030307)
Oct 5 05:21:22 localhost python3.9[148579]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:21:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9591 DF PROTO=TCP SPT=39394 DPT=9882 SEQ=4271188093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDAFE1E0000000001030307)
Oct 5 05:21:22 localhost python3.9[148671]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:21:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31459 DF PROTO=TCP SPT=47652 DPT=9105 SEQ=31385274 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB005E0000000001030307)
Oct 5 05:21:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50553 DF PROTO=TCP SPT=47400 DPT=9101 SEQ=739899430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB00AE0000000001030307)
Oct 5 05:21:23 localhost systemd[1]: session-48.scope: Deactivated successfully.
Oct 5 05:21:23 localhost systemd[1]: session-48.scope: Consumed 8.920s CPU time.
Oct 5 05:21:23 localhost systemd-logind[760]: Session 48 logged out. Waiting for processes to exit.
Oct 5 05:21:23 localhost systemd-logind[760]: Removed session 48.
Oct 5 05:21:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50581 DF PROTO=TCP SPT=58718 DPT=9100 SEQ=2874495457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB041D0000000001030307)
Oct 5 05:21:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50554 DF PROTO=TCP SPT=47400 DPT=9101 SEQ=739899430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB049D0000000001030307)
Oct 5 05:21:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9592 DF PROTO=TCP SPT=39394 DPT=9882 SEQ=4271188093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB061D0000000001030307)
Oct 5 05:21:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31460 DF PROTO=TCP SPT=47652 DPT=9105 SEQ=31385274 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB085D0000000001030307)
Oct 5 05:21:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50555 DF PROTO=TCP SPT=47400 DPT=9101 SEQ=739899430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB0C9D0000000001030307)
Oct 5 05:21:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9593 DF PROTO=TCP SPT=39394 DPT=9882 SEQ=4271188093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB15DD0000000001030307)
Oct 5 05:21:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31461 DF PROTO=TCP SPT=47652 DPT=9105 SEQ=31385274 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB181D0000000001030307)
Oct 5 05:21:30 localhost sshd[148686]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:21:30 localhost systemd-logind[760]: New session 49 of user zuul.
Oct 5 05:21:30 localhost systemd[1]: Started Session 49 of User zuul.
Oct 5 05:21:31 localhost python3.9[148779]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:21:33 localhost python3.9[148875]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:21:34 localhost python3.9[148967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:21:35 localhost python3.9[149040]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656093.9279563-185-44688378068413/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:35 localhost python3.9[149132]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:21:36 localhost python3.9[149224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:21:37 localhost python3.9[149297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656096.0490422-257-91663316868135/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:38 localhost python3.9[149389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:21:39 localhost python3.9[149481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:21:40 localhost python3.9[149554]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656098.55573-329-15789014565071/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:40 localhost python3.9[149646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:21:41 localhost python3.9[149738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:21:41 localhost python3.9[149811]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656100.8448176-400-244317745270232/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:42 localhost python3.9[149903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:21:43 localhost python3.9[149995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:21:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15455 DF PROTO=TCP SPT=55584 DPT=9102 SEQ=3608949821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB4FDE0000000001030307)
Oct 5 05:21:43 localhost python3.9[150068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656102.7171426-472-246817411367248/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15456 DF PROTO=TCP SPT=55584 DPT=9102 SEQ=3608949821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB53DD0000000001030307)
Oct 5 05:21:44 localhost python3.9[150161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:21:45 localhost python3.9[150253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:21:45 localhost python3.9[150326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656104.6427584-544-132589251720098/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:46 localhost python3.9[150418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:21:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15457 DF PROTO=TCP SPT=55584 DPT=9102 SEQ=3608949821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB5BDD0000000001030307)
Oct 5 05:21:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58713 DF PROTO=TCP SPT=39096 DPT=9100 SEQ=2200534914 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB5D6D0000000001030307)
Oct 5 05:21:47 localhost python3.9[150510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:21:47 localhost python3.9[150583]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656106.5759919-616-51153569089530/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58714 DF PROTO=TCP SPT=39096 DPT=9100 SEQ=2200534914 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB615D0000000001030307)
Oct 5 05:21:48 localhost python3.9[150675]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:21:49 localhost python3.9[150767]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:21:49 localhost python3.9[150840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656108.5733845-694-215115713070236/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58715 DF PROTO=TCP SPT=39096 DPT=9100 SEQ=2200534914 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB695E0000000001030307)
Oct 5 05:21:49 localhost systemd-logind[760]: Session 49 logged out. Waiting for processes to exit.
Oct 5 05:21:49 localhost systemd[1]: session-49.scope: Deactivated successfully.
Oct 5 05:21:49 localhost systemd[1]: session-49.scope: Consumed 12.055s CPU time.
Oct 5 05:21:49 localhost systemd-logind[760]: Removed session 49.
Oct 5 05:21:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15458 DF PROTO=TCP SPT=55584 DPT=9102 SEQ=3608949821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB6B9D0000000001030307)
Oct 5 05:21:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12662 DF PROTO=TCP SPT=34458 DPT=9105 SEQ=726426847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB71A00000000001030307)
Oct 5 05:21:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26431 DF PROTO=TCP SPT=54402 DPT=9882 SEQ=969224314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB7B5D0000000001030307)
Oct 5 05:21:56 localhost sshd[150856]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:21:56 localhost systemd-logind[760]: New session 50 of user zuul.
Oct 5 05:21:56 localhost systemd[1]: Started Session 50 of User zuul.
Oct 5 05:21:57 localhost python3.9[150951]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:21:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26432 DF PROTO=TCP SPT=54402 DPT=9882 SEQ=969224314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDB8B1D0000000001030307)
Oct 5 05:21:58 localhost python3.9[151043]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:21:59 localhost python3.9[151116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656118.0403686-62-8980783904355/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=d68e0db228a7d8458c08a66635a19e112f8e9d34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:22:00 localhost python3.9[151208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:22:00 localhost python3.9[151281]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656119.5496745-62-10410636501884/.source.conf _original_basename=ceph.conf follow=False checksum=9ed326307220aa83db0d8ce552ee8014f398d5df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:22:00 localhost systemd[1]: session-50.scope: Deactivated successfully.
Oct 5 05:22:00 localhost systemd[1]: session-50.scope: Consumed 2.340s CPU time.
Oct 5 05:22:00 localhost systemd-logind[760]: Session 50 logged out. Waiting for processes to exit.
Oct 5 05:22:00 localhost systemd-logind[760]: Removed session 50.
Oct 5 05:22:06 localhost sshd[151296]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:22:07 localhost sshd[151298]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:22:07 localhost systemd-logind[760]: New session 51 of user zuul.
Oct 5 05:22:07 localhost systemd[1]: Started Session 51 of User zuul.
Oct 5 05:22:08 localhost python3.9[151391]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:22:09 localhost python3.9[151487]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:22:10 localhost python3.9[151579]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:22:11 localhost python3.9[151669]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:22:11 localhost python3.9[151761]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 5 05:22:13 localhost python3.9[151853]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:22:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8213 DF PROTO=TCP SPT=52728 DPT=9102 SEQ=1882133323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDBC50D0000000001030307)
Oct 5 05:22:14 localhost python3.9[151907]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:22:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8214 DF PROTO=TCP SPT=52728 DPT=9102 SEQ=1882133323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDBC91D0000000001030307)
Oct 5 05:22:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8215 DF PROTO=TCP SPT=52728 DPT=9102 SEQ=1882133323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDBD11D0000000001030307)
Oct 5 05:22:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62263 DF PROTO=TCP SPT=44046 DPT=9100 SEQ=4143485349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDBD29C0000000001030307)
Oct 5 05:22:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62264 DF PROTO=TCP SPT=44046 DPT=9100 SEQ=4143485349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDBD69E0000000001030307)
Oct 5 05:22:19 localhost python3.9[152062]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 5 05:22:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62265 DF PROTO=TCP SPT=44046 DPT=9100 SEQ=4143485349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDBDE9E0000000001030307)
Oct 5 05:22:19 localhost python3[152172]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 5 05:22:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8216 DF PROTO=TCP SPT=52728 DPT=9102 SEQ=1882133323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDBE0DE0000000001030307)
Oct 5 05:22:21 localhost python3.9[152264]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:22:21 localhost python3.9[152356]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:22:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53128 DF PROTO=TCP SPT=45090 DPT=9105 SEQ=333070909 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDBE6CF0000000001030307)
Oct 5 05:22:22 localhost python3.9[152404]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:22:23 localhost python3.9[152496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:22:23 localhost python3.9[152544]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.qhuzxkfu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None
seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:24 localhost python3.9[152636]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11960 DF PROTO=TCP SPT=45734 DPT=9882 SEQ=4250669564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDBF09D0000000001030307) Oct 5 05:22:24 localhost python3.9[152684]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:25 localhost python3.9[152776]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:26 localhost python3[152869]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 5 05:22:27 localhost python3.9[152961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:27 localhost python3.9[153036]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656146.5803819-431-238782270066279/.source.nft follow=False 
_original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:28 localhost python3.9[153128]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11961 DF PROTO=TCP SPT=45734 DPT=9882 SEQ=4250669564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC005E0000000001030307) Oct 5 05:22:28 localhost python3.9[153203]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656147.9073758-476-195707019598814/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:30 localhost python3.9[153295]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:30 localhost python3.9[153370]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656149.7026997-521-31366044260647/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True 
remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:32 localhost python3.9[153462]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:32 localhost python3.9[153537]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656151.5775044-566-11633926434543/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:33 localhost python3.9[153629]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:33 localhost python3.9[153704]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656152.729925-611-57050244324712/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:34 localhost python3.9[153796]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:35 localhost python3.9[153888]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:35 localhost python3.9[153983]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:36 localhost python3.9[154075]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:37 localhost python3.9[154168]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:22:37 localhost python3.9[154262]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft 
/etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:38 localhost python3.9[154358]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:39 localhost python3.9[154448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:22:41 localhost python3.9[154541]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005471150.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:85:5b:92:b0" external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:41 localhost ovs-vsctl[154542]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . 
external_ids:hostname=np0005471150.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:85:5b:92:b0 external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch Oct 5 05:22:42 localhost python3.9[154634]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=955 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=756522737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC3A3E0000000001030307) Oct 5 05:22:43 localhost python3.9[154727]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:22:44 localhost python3.9[154821]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=956 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=756522737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC3E5E0000000001030307) Oct 5 05:22:44 localhost python3.9[154913]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:45 localhost python3.9[154961]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:45 localhost python3.9[155053]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:46 localhost python3.9[155101]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=957 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=756522737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC465D0000000001030307) Oct 5 05:22:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1265 DF PROTO=TCP SPT=37424 DPT=9100 SEQ=3614970280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC47CD0000000001030307) Oct 5 05:22:47 localhost python3.9[155193]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:47 localhost python3.9[155285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1266 DF PROTO=TCP SPT=37424 DPT=9100 SEQ=3614970280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC4BDD0000000001030307) Oct 5 05:22:48 localhost python3.9[155333]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:48 localhost python3.9[155425]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:48 localhost sshd[155445]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:22:49 localhost python3.9[155475]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1267 DF PROTO=TCP SPT=37424 DPT=9100 SEQ=3614970280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC53DD0000000001030307) Oct 5 05:22:50 localhost python3.9[155567]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:22:50 localhost systemd[1]: Reloading. Oct 5 05:22:50 localhost systemd-sysv-generator[155596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:22:50 localhost systemd-rc-local-generator[155591]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:22:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:22:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=958 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=756522737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC561D0000000001030307) Oct 5 05:22:51 localhost python3.9[155696]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33419 DF PROTO=TCP SPT=33288 DPT=9105 SEQ=1244700985 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC5BFF0000000001030307) Oct 5 05:22:52 localhost python3.9[155744]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:53 localhost python3.9[155836]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True Oct 5 05:22:53 localhost python3.9[155884]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:54 localhost python3.9[155976]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:22:54 localhost systemd[1]: Reloading. Oct 5 05:22:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43712 DF PROTO=TCP SPT=59090 DPT=9882 SEQ=1417885931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC65DD0000000001030307) Oct 5 05:22:54 localhost systemd-rc-local-generator[155999]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:22:54 localhost systemd-sysv-generator[156002]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:22:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:22:54 localhost systemd[1]: Starting Create netns directory... Oct 5 05:22:54 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Oct 5 05:22:54 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:22:54 localhost systemd[1]: Finished Create netns directory. Oct 5 05:22:55 localhost python3.9[156111]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:56 localhost python3.9[156203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:56 localhost python3.9[156276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656175.874427-1343-102842973088986/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:57 localhost python3.9[156368]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:58 localhost python3.9[156460]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43713 DF PROTO=TCP SPT=59090 DPT=9882 SEQ=1417885931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDC759E0000000001030307) Oct 5 05:22:59 localhost python3.9[156535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656178.055432-1418-22088950951228/.source.json _original_basename=.o7bvh63o follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:59 localhost python3.9[156627]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:02 localhost python3.9[156884]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Oct 5 05:23:04 localhost python3.9[156976]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:23:04 localhost python3.9[157068]: ansible-containers.podman.podman_container_info Invoked with 
executable=podman name=None Oct 5 05:23:08 localhost python3[157187]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:23:09 localhost python3[157187]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "55a900d9f0d3284e9f7b4ec31d42a516ca3b16bc0ce186b6223860f9b9ee7269",#012 "Digest": "sha256:32b3cf3043ae552a67b716cf04bf0bdb981e8077ccb2893336edcc36bfd3946d",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:32b3cf3043ae552a67b716cf04bf0bdb981e8077ccb2893336edcc36bfd3946d"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:40:17.17546349Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 345642952,#012 "VirtualSize": 345642952,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/60afe3546a98a201263be776cccb4442ad15a631184295cbccd8c923b430a1f8/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/7387ebb91ae53af911fb3fe7ebf50b644c069b423a8881cafb6a1fa3f2b4168a/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/7387ebb91ae53af911fb3fe7ebf50b644c069b423a8881cafb6a1fa3f2b4168a/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:5ff34b53abd092090c68bcc95bc461f0d3ee7243562df6154491ba8d09607eec",#012 "sha256:0b25eff48e4a51bccec814322a9b10589b6ba63d76de0828aaf9fdfd4dfb16c0"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" 
org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main 
keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:05.877369315Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l Oct 5 05:23:09 localhost podman[157239]: 2025-10-05 09:23:09.326296592 +0000 UTC m=+0.087108373 container remove 14e80b55e18d201b4b67a21ea4db1bd9c969a2867f406a850e46de1a6b7a81bc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 05:23:09 localhost python3[157187]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Oct 5 05:23:09 localhost podman[157252]: Oct 5 05:23:09 localhost podman[157252]: 2025-10-05 09:23:09.428197341 +0000 UTC m=+0.083974462 container create 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
io.buildah.version=1.41.3) Oct 5 05:23:09 localhost podman[157252]: 2025-10-05 09:23:09.388800191 +0000 UTC m=+0.044577302 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Oct 5 05:23:09 localhost python3[157187]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Oct 5 05:23:10 localhost python3.9[157380]: ansible-ansible.builtin.stat Invoked with 
path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:23:11 localhost python3.9[157474]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:12 localhost python3.9[157520]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:23:12 localhost python3.9[157611]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656192.0693955-1682-112620683353296/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:13 localhost python3.9[157657]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:23:13 localhost systemd[1]: Reloading. Oct 5 05:23:13 localhost systemd-rc-local-generator[157678]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:23:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2050 DF PROTO=TCP SPT=50502 DPT=9102 SEQ=194708935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDCAF6E0000000001030307) Oct 5 05:23:13 localhost systemd-sysv-generator[157683]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:23:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:23:14 localhost python3.9[157739]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:23:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2051 DF PROTO=TCP SPT=50502 DPT=9102 SEQ=194708935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDCB35E0000000001030307) Oct 5 05:23:15 localhost systemd[1]: Reloading. Oct 5 05:23:15 localhost systemd-sysv-generator[157769]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:23:15 localhost systemd-rc-local-generator[157764]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:23:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 5 05:23:15 localhost systemd[1]: Starting ovn_controller container... Oct 5 05:23:15 localhost systemd[1]: tmp-crun.mQAyKo.mount: Deactivated successfully. Oct 5 05:23:15 localhost systemd[1]: Started libcrun container. Oct 5 05:23:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a78d1a613929daaa0762209ae29100c62a9bbdfc8bf62333bb8b7ab6eb802f4/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Oct 5 05:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:23:15 localhost podman[157781]: 2025-10-05 09:23:15.928313284 +0000 UTC m=+0.151841688 container init 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:23:15 localhost ovn_controller[157794]: + sudo -E kolla_set_configs Oct 5 05:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:23:15 localhost podman[157781]: 2025-10-05 09:23:15.970949328 +0000 UTC m=+0.194477712 container start 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:23:15 localhost edpm-start-podman-container[157781]: ovn_controller Oct 5 05:23:15 localhost systemd[1]: Created slice User Slice of UID 0. 
Oct 5 05:23:15 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 5 05:23:15 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 05:23:16 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 05:23:16 localhost podman[157802]: 2025-10-05 09:23:16.06257162 +0000 UTC m=+0.088385529 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:23:16 localhost edpm-start-podman-container[157780]: Creating additional drop-in dependency for "ovn_controller" (1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222) Oct 5 05:23:16 localhost podman[157802]: 2025-10-05 09:23:16.141578998 +0000 UTC m=+0.167392947 
container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:23:16 localhost podman[157802]: unhealthy Oct 5 05:23:16 localhost systemd[1]: Reloading. Oct 5 05:23:16 localhost systemd[157822]: Queued start job for default target Main User Target. Oct 5 05:23:16 localhost systemd[157822]: Created slice User Application Slice. Oct 5 05:23:16 localhost systemd[157822]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 5 05:23:16 localhost systemd[157822]: Started Daily Cleanup of User's Temporary Directories. Oct 5 05:23:16 localhost systemd[157822]: Reached target Paths. 
Oct 5 05:23:16 localhost systemd[157822]: Reached target Timers. Oct 5 05:23:16 localhost systemd[157822]: Starting D-Bus User Message Bus Socket... Oct 5 05:23:16 localhost systemd[157822]: Starting Create User's Volatile Files and Directories... Oct 5 05:23:16 localhost systemd[157822]: Listening on D-Bus User Message Bus Socket. Oct 5 05:23:16 localhost systemd[157822]: Reached target Sockets. Oct 5 05:23:16 localhost systemd[157822]: Finished Create User's Volatile Files and Directories. Oct 5 05:23:16 localhost systemd[157822]: Reached target Basic System. Oct 5 05:23:16 localhost systemd[157822]: Reached target Main User Target. Oct 5 05:23:16 localhost systemd[157822]: Startup finished in 120ms. Oct 5 05:23:16 localhost systemd-rc-local-generator[157879]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:23:16 localhost systemd-sysv-generator[157882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:23:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:23:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2052 DF PROTO=TCP SPT=50502 DPT=9102 SEQ=194708935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDCBB5D0000000001030307) Oct 5 05:23:16 localhost systemd[1]: Started User Manager for UID 0. Oct 5 05:23:16 localhost systemd[1]: Started ovn_controller container. 
Oct 5 05:23:16 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:23:16 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Failed with result 'exit-code'. Oct 5 05:23:16 localhost systemd[1]: Started Session c12 of User root. Oct 5 05:23:16 localhost ovn_controller[157794]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:23:16 localhost ovn_controller[157794]: INFO:__main__:Validating config file Oct 5 05:23:16 localhost ovn_controller[157794]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:23:16 localhost ovn_controller[157794]: INFO:__main__:Writing out command to execute Oct 5 05:23:16 localhost systemd[1]: session-c12.scope: Deactivated successfully. Oct 5 05:23:16 localhost ovn_controller[157794]: ++ cat /run_command Oct 5 05:23:16 localhost ovn_controller[157794]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Oct 5 05:23:16 localhost ovn_controller[157794]: + ARGS= Oct 5 05:23:16 localhost ovn_controller[157794]: + sudo kolla_copy_cacerts Oct 5 05:23:16 localhost systemd[1]: Started Session c13 of User root. Oct 5 05:23:16 localhost systemd[1]: session-c13.scope: Deactivated successfully. Oct 5 05:23:16 localhost ovn_controller[157794]: + [[ ! -n '' ]] Oct 5 05:23:16 localhost ovn_controller[157794]: + . 
kolla_extend_start Oct 5 05:23:16 localhost ovn_controller[157794]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Oct 5 05:23:16 localhost ovn_controller[157794]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Oct 5 05:23:16 localhost ovn_controller[157794]: + umask 0022 Oct 5 05:23:16 localhost ovn_controller[157794]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8] Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00004|main|INFO|OVS IDL reconnected, force recompute. Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00011|features|INFO|OVS Feature: ct_flush, state: supported Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00013|main|INFO|OVS feature set changed, force recompute. Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms) Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute. Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00021|main|INFO|OVS feature set changed, force recompute. 
Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00022|ovn_bfd|INFO|Disabled BFD on interface ovn-fe3fe5-0 Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00023|ovn_bfd|INFO|Disabled BFD on interface ovn-891f35-0 Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00024|ovn_bfd|INFO|Disabled BFD on interface ovn-85ea67-0 Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00025|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00026|binding|INFO|Claiming lport 4db5c636-3094-4e86-9093-8123489e64be for this chassis. Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00027|binding|INFO|4db5c636-3094-4e86-9093-8123489e64be: Claiming fa:16:3e:a6:2c:a3 192.168.0.56 Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00028|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00029|binding|INFO|Removing lport 4db5c636-3094-4e86-9093-8123489e64be ovn-installed in OVS Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00030|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00031|ovn_bfd|INFO|Enabled BFD on interface ovn-fe3fe5-0 Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00032|ovn_bfd|INFO|Enabled BFD on interface ovn-891f35-0 Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00033|ovn_bfd|INFO|Enabled BFD on interface ovn-85ea67-0 Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00034|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00035|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00036|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:23:16 localhost ovn_controller[157794]: 2025-10-05T09:23:16Z|00037|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:23:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28134 DF PROTO=TCP SPT=33264 DPT=9100 SEQ=3466127006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDCBCFD0000000001030307) Oct 5 05:23:17 localhost python3.9[157993]: ansible-ansible.legacy.command Invoked with 
_raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:23:17 localhost ovs-vsctl[157994]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload Oct 5 05:23:17 localhost ovn_controller[157794]: 2025-10-05T09:23:17Z|00038|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:23:17 localhost ovn_controller[157794]: 2025-10-05T09:23:17Z|00039|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:23:17 localhost python3.9[158086]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:23:17 localhost ovs-vsctl[158088]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids Oct 5 05:23:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28135 DF PROTO=TCP SPT=33264 DPT=9100 SEQ=3466127006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDCC11D0000000001030307) Oct 5 05:23:18 localhost ovn_controller[157794]: 2025-10-05T09:23:18Z|00040|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:23:18 localhost python3.9[158182]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . 
external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:23:18 localhost ovs-vsctl[158183]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Oct 5 05:23:19 localhost systemd-logind[760]: Session 51 logged out. Waiting for processes to exit. Oct 5 05:23:19 localhost systemd[1]: session-51.scope: Deactivated successfully. Oct 5 05:23:19 localhost systemd[1]: session-51.scope: Consumed 41.507s CPU time. Oct 5 05:23:19 localhost systemd-logind[760]: Removed session 51. Oct 5 05:23:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28136 DF PROTO=TCP SPT=33264 DPT=9100 SEQ=3466127006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDCC91D0000000001030307) Oct 5 05:23:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2053 DF PROTO=TCP SPT=50502 DPT=9102 SEQ=194708935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDCCB1D0000000001030307) Oct 5 05:23:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39256 DF PROTO=TCP SPT=56184 DPT=9105 SEQ=852320248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDCD12F0000000001030307) Oct 5 05:23:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25215 DF PROTO=TCP SPT=51188 DPT=9882 SEQ=665177074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080ABEDCDADD0000000001030307) Oct 5 05:23:24 localhost ovn_controller[157794]: 2025-10-05T09:23:24Z|00041|binding|INFO|Setting lport 4db5c636-3094-4e86-9093-8123489e64be ovn-installed in OVS Oct 5 05:23:24 localhost ovn_controller[157794]: 2025-10-05T09:23:24Z|00042|binding|INFO|Setting lport 4db5c636-3094-4e86-9093-8123489e64be up in Southbound Oct 5 05:23:25 localhost sshd[158274]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:23:25 localhost systemd-logind[760]: New session 53 of user zuul. Oct 5 05:23:25 localhost systemd[1]: Started Session 53 of User zuul. Oct 5 05:23:26 localhost python3.9[158367]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:23:26 localhost systemd[1]: Stopping User Manager for UID 0... Oct 5 05:23:26 localhost systemd[157822]: Activating special unit Exit the Session... Oct 5 05:23:26 localhost systemd[157822]: Stopped target Main User Target. Oct 5 05:23:26 localhost systemd[157822]: Stopped target Basic System. Oct 5 05:23:26 localhost systemd[157822]: Stopped target Paths. Oct 5 05:23:26 localhost systemd[157822]: Stopped target Sockets. Oct 5 05:23:26 localhost systemd[157822]: Stopped target Timers. Oct 5 05:23:26 localhost systemd[157822]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 05:23:26 localhost systemd[157822]: Closed D-Bus User Message Bus Socket. Oct 5 05:23:26 localhost systemd[157822]: Stopped Create User's Volatile Files and Directories. Oct 5 05:23:26 localhost systemd[157822]: Removed slice User Application Slice. Oct 5 05:23:26 localhost systemd[157822]: Reached target Shutdown. Oct 5 05:23:26 localhost systemd[157822]: Finished Exit the Session. Oct 5 05:23:26 localhost systemd[157822]: Reached target Exit the Session. Oct 5 05:23:26 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 05:23:26 localhost systemd[1]: Stopped User Manager for UID 0. 
Oct 5 05:23:26 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 05:23:26 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 5 05:23:26 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 05:23:26 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 05:23:26 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 05:23:27 localhost python3.9[158464]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:28 localhost python3.9[158556]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25216 DF PROTO=TCP SPT=51188 DPT=9882 SEQ=665177074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDCEA9E0000000001030307) Oct 5 05:23:28 localhost python3.9[158648]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:29 localhost python3.9[158740]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:30 localhost python3.9[158832]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:30 localhost python3.9[158922]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:23:31 localhost python3.9[159014]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Oct 5 05:23:32 localhost python3.9[159104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:33 localhost python3.9[159177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656211.8080294-218-223313439058333/.source follow=False _original_basename=haproxy.j2 
checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:34 localhost python3.9[159267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:34 localhost python3.9[159340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656213.9085395-263-82021733786607/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:36 localhost python3.9[159433]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:23:37 localhost python3.9[159487]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:23:41 localhost python3.9[159581]: ansible-ansible.builtin.systemd Invoked with enabled=True 
masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:23:42 localhost python3.9[159674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:43 localhost python3.9[159745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656222.372794-374-233080457347011/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3459 DF PROTO=TCP SPT=52772 DPT=9102 SEQ=3044259404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD249D0000000001030307) Oct 5 05:23:43 localhost python3.9[159835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:44 localhost python3.9[159906]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656223.412681-374-159539902857066/.source.conf follow=False 
_original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3460 DF PROTO=TCP SPT=52772 DPT=9102 SEQ=3044259404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD289E0000000001030307) Oct 5 05:23:46 localhost python3.9[159996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3461 DF PROTO=TCP SPT=52772 DPT=9102 SEQ=3044259404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD309D0000000001030307) Oct 5 05:23:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:23:46 localhost python3.9[160067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656225.678098-506-27517246645496/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:46 localhost podman[160068]: 2025-10-05 09:23:46.667505576 +0000 UTC m=+0.074445175 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:23:46 localhost ovn_controller[157794]: 2025-10-05T09:23:46Z|00043|memory|INFO|18884 kB peak resident set size after 30.1 seconds Oct 5 05:23:46 localhost ovn_controller[157794]: 2025-10-05T09:23:46Z|00044|memory|INFO|idl-cells-OVN_Southbound:3978 idl-cells-Open_vSwitch:1045 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:76 lflow-cache-entries-cache-matches:195 lflow-cache-size-KB:288 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:152 ofctrl_installed_flow_usage-KB:111 ofctrl_sb_flow_ref_usage-KB:66 Oct 5 05:23:46 localhost podman[160068]: 2025-10-05 09:23:46.701684317 +0000 UTC m=+0.108623896 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 05:23:46 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:23:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12824 DF PROTO=TCP SPT=37678 DPT=9100 SEQ=172093376 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD322D0000000001030307) Oct 5 05:23:47 localhost python3.9[160182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12825 DF PROTO=TCP SPT=37678 DPT=9100 SEQ=172093376 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD361D0000000001030307) Oct 5 05:23:48 localhost python3.9[160253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656226.7550278-506-103301943916254/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:48 localhost python3.9[160343]: ansible-ansible.builtin.stat Invoked with 
path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:23:49 localhost python3.9[160437]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12826 DF PROTO=TCP SPT=37678 DPT=9100 SEQ=172093376 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD3E1D0000000001030307) Oct 5 05:23:50 localhost python3.9[160529]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3462 DF PROTO=TCP SPT=52772 DPT=9102 SEQ=3044259404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD405D0000000001030307) Oct 5 05:23:50 localhost python3.9[160577]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None 
seuser=None serole=None selevel=None attributes=None Oct 5 05:23:51 localhost python3.9[160669]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:51 localhost python3.9[160717]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:23:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13954 DF PROTO=TCP SPT=58732 DPT=9105 SEQ=226114359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD465F0000000001030307) Oct 5 05:23:52 localhost python3.9[160809]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:52 localhost python3.9[160901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:53 localhost python3.9[160949]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root 
dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:54 localhost python3.9[161041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:54 localhost python3.9[161089]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58325 DF PROTO=TCP SPT=53600 DPT=9882 SEQ=599701021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD501E0000000001030307) Oct 5 05:23:54 localhost ovn_controller[157794]: 2025-10-05T09:23:54Z|00045|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory Oct 5 05:23:55 localhost python3.9[161181]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 
05:23:55 localhost systemd[1]: Reloading. Oct 5 05:23:55 localhost systemd-rc-local-generator[161205]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:23:55 localhost systemd-sysv-generator[161211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:23:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:23:56 localhost python3.9[161311]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:57 localhost python3.9[161359]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:58 localhost python3.9[161451]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:23:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58326 DF PROTO=TCP SPT=53600 DPT=9882 SEQ=599701021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080ABEDD5FDE0000000001030307) Oct 5 05:23:59 localhost python3.9[161499]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:00 localhost python3.9[161591]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:00 localhost systemd[1]: Reloading. Oct 5 05:24:00 localhost systemd-rc-local-generator[161612]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:00 localhost systemd-sysv-generator[161616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:24:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:00 localhost systemd[1]: Starting Create netns directory... Oct 5 05:24:00 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:24:00 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:24:00 localhost systemd[1]: Finished Create netns directory. 
Oct 5 05:24:01 localhost python3.9[161727]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:24:02 localhost python3.9[161819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:24:02 localhost python3.9[161892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656241.5228002-959-278552070560741/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:24:03 localhost python3.9[161984]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:24:04 localhost python3.9[162076]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True Oct 5 05:24:04 localhost python3.9[162151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656243.5844915-1034-260804176326977/.source.json _original_basename=.fixet_s1 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:05 localhost python3.9[162243]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:07 localhost python3.9[162500]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Oct 5 05:24:08 localhost python3.9[162592]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:24:09 localhost python3.9[162684]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:24:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48334 DF PROTO=TCP SPT=56132 DPT=9102 SEQ=3128359080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD99CE0000000001030307) Oct 5 05:24:13 localhost python3[162802]: ansible-edpm_container_manage Invoked with 
concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:24:13 localhost python3[162802]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "484a8e9b317dc3c79222f8881637d84827689f07b39da081149288f7f4e4c6e5",#012 "Digest": "sha256:233c16d7dd07b08322829bae5a63ad7cffcf46ecf4af5469ace57d26ee006607",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:233c16d7dd07b08322829bae5a63ad7cffcf46ecf4af5469ace57d26ee006607"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:30:29.428510147Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 784020738,#012 "VirtualSize": 784020738,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/5dec2b237273ccb78113c2b1c492ef164c4f5b231452e08517989bb84e3d4334/diff:/var/lib/containers/storage/overlay/742d30f08a388c298396549889c67e956a0883467079259a53d0a019a9ad0478/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/f5944eec7fb469ae9b7574ded24c1a7fe3b9eaecc032f74894fb3b6f1ca0c38e/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/f5944eec7fb469ae9b7574ded24c1a7fe3b9eaecc032f74894fb3b6f1ca0c38e/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:78752b72dcf3ae244a81cb8c65b7d5fdd7f58198588f5b7d6f1b871b40a43830",#012 "sha256:ae3018f56d99031ced3e0313d6ced246defa366d2edcaf6c9a695cd7ecd3992d",#012 "sha256:a6b2e01de070886feb7ef7949f5a4cea2598b7418a8c15d220d6eb5abb98b85b"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) 
ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main 
clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.con Oct 5 05:24:13 localhost podman[162849]: 2025-10-05 09:24:13.938526801 +0000 UTC m=+0.089178898 container remove cadfe83bef154b9261ca23878386964d1e6f474bc5844d54f3189cb640dba87b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9e8d2afb999998c163aa5ea4d40dbbed'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:24:13 localhost python3[162802]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Oct 5 05:24:14 localhost podman[162863]: Oct 5 05:24:14 localhost podman[162863]: 2025-10-05 09:24:14.034565193 +0000 UTC m=+0.076887456 container create ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible) Oct 5 05:24:14 localhost podman[162863]: 2025-10-05 09:24:13.992005835 +0000 UTC m=+0.034328118 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 05:24:14 localhost python3[162802]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label 
config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z 
quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 05:24:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48335 DF PROTO=TCP SPT=56132 DPT=9102 SEQ=3128359080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDD9DDD0000000001030307) Oct 5 05:24:14 localhost python3.9[162991]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:24:15 localhost python3.9[163085]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:16 localhost python3.9[163131]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:24:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48336 DF PROTO=TCP SPT=56132 DPT=9102 SEQ=3128359080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDDA5DD0000000001030307) Oct 5 05:24:16 localhost python3.9[163222]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656256.0894415-1298-171339807855563/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55780 DF PROTO=TCP SPT=45506 DPT=9100 SEQ=2817958697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDDA75D0000000001030307) Oct 5 05:24:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:24:17 localhost podman[163269]: 2025-10-05 09:24:17.111188307 +0000 UTC m=+0.092812216 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible) Oct 5 05:24:17 localhost systemd[1]: tmp-crun.xdxJCU.mount: Deactivated successfully. Oct 5 05:24:17 localhost podman[163269]: 2025-10-05 09:24:17.150751104 +0000 UTC m=+0.132374993 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:24:17 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:24:17 localhost python3.9[163268]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:24:17 localhost systemd[1]: Reloading. Oct 5 05:24:17 localhost systemd-rc-local-generator[163318]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:17 localhost systemd-sysv-generator[163322]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:24:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55781 DF PROTO=TCP SPT=45506 DPT=9100 SEQ=2817958697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDDAB5D0000000001030307) Oct 5 05:24:18 localhost python3.9[163373]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:18 localhost systemd[1]: Reloading. Oct 5 05:24:18 localhost systemd-rc-local-generator[163399]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:18 localhost systemd-sysv-generator[163404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:24:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:18 localhost systemd[1]: Starting ovn_metadata_agent container... Oct 5 05:24:18 localhost systemd[1]: Started libcrun container. Oct 5 05:24:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ddcb4f266cbb116f20a2245c6ef1870dce9add46896b6ee59f1b90a01edf4bb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 5 05:24:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ddcb4f266cbb116f20a2245c6ef1870dce9add46896b6ee59f1b90a01edf4bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:24:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:24:18 localhost podman[163414]: 2025-10-05 09:24:18.813003131 +0000 UTC m=+0.158103788 container init ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: + sudo -E kolla_set_configs Oct 5 05:24:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:24:18 localhost podman[163414]: 2025-10-05 09:24:18.854432549 +0000 UTC m=+0.199533206 container start ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 05:24:18 localhost edpm-start-podman-container[163414]: ovn_metadata_agent Oct 5 05:24:18 localhost 
ovn_metadata_agent[163429]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Validating config file Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Copying service configuration files Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Writing out command to execute Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Oct 5 05:24:18 localhost 
ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.pid.haproxy Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.conf Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: ++ cat /run_command Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: + CMD=neutron-ovn-metadata-agent Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: + ARGS= Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: + sudo kolla_copy_cacerts Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: + [[ ! -n '' ]] Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: + . 
kolla_extend_start Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: Running command: 'neutron-ovn-metadata-agent' Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: + umask 0022 Oct 5 05:24:18 localhost ovn_metadata_agent[163429]: + exec neutron-ovn-metadata-agent Oct 5 05:24:18 localhost podman[163437]: 2025-10-05 09:24:18.94855744 +0000 UTC m=+0.088280544 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 05:24:19 localhost podman[163437]: 2025-10-05 09:24:19.034656013 +0000 UTC m=+0.174379127 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Oct 5 05:24:19 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:24:19 localhost edpm-start-podman-container[163413]: Creating additional drop-in dependency for "ovn_metadata_agent" (ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa) Oct 5 05:24:19 localhost systemd[1]: Reloading. Oct 5 05:24:19 localhost systemd-rc-local-generator[163505]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:19 localhost systemd-sysv-generator[163509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:24:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:19 localhost systemd[1]: Started ovn_metadata_agent container. Oct 5 05:24:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55782 DF PROTO=TCP SPT=45506 DPT=9100 SEQ=2817958697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDDB35D0000000001030307) Oct 5 05:24:20 localhost systemd[1]: session-53.scope: Deactivated successfully. Oct 5 05:24:20 localhost systemd[1]: session-53.scope: Consumed 32.253s CPU time. Oct 5 05:24:20 localhost systemd-logind[760]: Session 53 logged out. Waiting for processes to exit. Oct 5 05:24:20 localhost systemd-logind[760]: Removed session 53. 
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.386 163434 INFO neutron.common.config [-] Logging enabled!#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.386 163434 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.386 163434 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] 
agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.387 163434 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 
2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.388 163434 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.389 163434 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.390 163434 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.390 163434 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.390 163434 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.390 163434 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.390 163434 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.390 163434 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.390 163434 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:20.390 163434 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.390 163434 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval 
= 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.391 163434 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s 
%(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.392 163434 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.393 163434 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.393 163434 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.393 163434 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.393 163434 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.393 163434 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.393 163434 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.393 163434 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.393 163434 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.393 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 
localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG 
neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.394 163434 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.395 163434 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG 
neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.396 163434 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.397 163434 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.397 163434 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.397 163434 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:20.397 163434 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.397 163434 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.397 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.397 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.397 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.397 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.398 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 
09:24:20.399 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.399 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.399 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.399 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.399 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.399 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.399 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.399 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.399 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.400 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.400 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.400 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.400 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.400 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.400 163434 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.400 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.400 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.401 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.402 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.402 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.402 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.402 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.402 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.402 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.402 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.402 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.402 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.403 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.404 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.404 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.404 163434 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.404 163434 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.404 163434 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.404 163434 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.404 163434 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.404 163434 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.404 163434 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.405 163434 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.406 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.407 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.408 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.408 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.408 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.408 163434 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.408 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.408 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.408 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.408 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.408 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.409 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.410 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.410 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.410 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.410 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.410 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.410 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.410 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.410 163434 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.410 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.411 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.412 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.412 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.412 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.412 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.412 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.412 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.412 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.412 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.412 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.413 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.414 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.415 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.416 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.416 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.416 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.416 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.416 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.416 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.416 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.416 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.416 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.417 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.417 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.417 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.417 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.417 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.417 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.417 163434 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.417 163434 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.426 163434 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.426 163434 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.426 163434 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.427 163434 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.427 163434 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Oct 5 05:24:20 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:20.443 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 3b30d637-702a-429f-9027-888244ff6474 (UUID: 3b30d637-702a-429f-9027-888244ff6474) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.458 163434 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.458 163434 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.458 163434 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.458 163434 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.460 163434 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.461 163434 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.468 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: PortBindingCreateWithChassis(events=('create',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:2c:a3 192.168.0.56'], 
port_security=['fa:16:3e:a6:2c:a3 192.168.0.56'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.56/24', 'neutron:device_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005471150.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': '8b36437b65444bcdac75beef77b6981e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4fbe78ed-92dd-4e52-8c97-e662f3cb3af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f49a96c-a4ec-4b07-9e41-306ef014a4cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=4db5c636-3094-4e86-9093-8123489e64be) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.469 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '3b30d637-702a-429f-9027-888244ff6474'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': '90352bbc-1467-52d1-9e15-4f1d6fd40d7a', 'neutron:ovn-metadata-sb-cfg': '1'}, name=3b30d637-702a-429f-9027-888244ff6474, nb_cfg_timestamp=1759656205268, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.470 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 
4db5c636-3094-4e86-9093-8123489e64be in datapath 20d6a6dc-0f38-4a89-b3fc-56befd04e92f bound to our chassis on insert#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.470 163434 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.471 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.471 163434 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.471 163434 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.471 163434 INFO oslo_service.service [-] Starting 1 workers#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.473 163434 DEBUG oslo_service.service [-] Started child 163532 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.476 163434 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20d6a6dc-0f38-4a89-b3fc-56befd04e92f#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.477 163434 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', 
'--privsep_sock_path', '/tmp/tmpxd7j78mp/privsep.sock']#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.477 163532 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-162769'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.498 163532 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.498 163532 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.498 163532 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.501 163532 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.503 163532 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Oct 5 05:24:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48337 DF PROTO=TCP SPT=56132 DPT=9102 SEQ=3128359080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDDB59D0000000001030307) Oct 5 05:24:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.517 163532 INFO eventlet.wsgi.server [-] (163532) wsgi starting up on 
http:/var/lib/neutron/metadata_proxy#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:21.136 163434 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:21.137 163434 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpxd7j78mp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.986 163567 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.993 163567 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.996 163567 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:20.996 163567 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163567#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:21.140 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[fad5a3a2-d296-4f01-a2ce-69e19d1ca84b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:21.560 163567 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:21.560 163567 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by 
"neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:21.560 163567 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:24:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35771 DF PROTO=TCP SPT=34592 DPT=9105 SEQ=2513468127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDDBB8F0000000001030307) Oct 5 05:24:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:22.062 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3249f944-e7f7-42b6-b084-8648fc4e172c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:22.064 163434 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpqe04oa30/privsep.sock']#033[00m Oct 5 05:24:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:22.725 163434 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:24:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:22.726 163434 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqe04oa30/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Oct 5 05:24:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:22.580 163625 INFO oslo.privsep.daemon [-] privsep daemon 
starting#033[00m Oct 5 05:24:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:22.587 163625 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:24:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:22.591 163625 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Oct 5 05:24:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:22.591 163625 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163625#033[00m Oct 5 05:24:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:22.729 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[62707605-ba0b-44de-99bc-8048ad7c9685]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.185 163625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.185 163625 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.185 163625 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.657 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[9e49f1f7-62c5-4d27-be4e-92c5b7acbfa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:23 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:23.661 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[a3113f9b-b1a7-40b8-9302-bd9800d91cec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.682 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[2e25a674-5b6d-4d1a-a150-608d961b8c48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.695 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[648c10a7-a1d6-40cf-a2f7-aa271c9ffc14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20d6a6dc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4e:95:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 104, 'tx_packets': 68, 'rx_bytes': 8926, 'tx_bytes': 7142, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], 
['IFLA_STATS', {'rx_packets': 104, 'tx_packets': 68, 'rx_bytes': 8926, 'tx_bytes': 7142, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483664], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649320, 'reachable_time': 31416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 
0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 17, 'outoctets': 1164, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 17, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 1164, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 17, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 163635, 'error': None, 'target': 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.706 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[990d9ef6-51b0-4f38-b598-1ad1d7eccb35]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap20d6a6dc-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649329, 'tstamp': 649329}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 163636, 'error': None, 'target': 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 
'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap20d6a6dc-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649334, 'tstamp': 649334}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 163636, 'error': None, 'target': 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 128, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::a9fe:a9fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649331, 'tstamp': 649331}], ['IFA_FLAGS', 128]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 163636, 'error': None, 'target': 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 128, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:95ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649320, 'tstamp': 649320}], ['IFA_FLAGS', 128]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 163636, 'error': None, 'target': 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.743 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[488f9dd7-cda8-4d54-a43b-868f38a193e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.745 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn 
n=1 command(idx=0): DelPortCommand(_result=None, port=tap20d6a6dc-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.749 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20d6a6dc-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.750 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.750 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20d6a6dc-00, col_values=(('external_ids', {'iface-id': 'cd4e79ca-7111-4d41-b9b0-672ba46474d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.751 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:24:23 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:23.755 163434 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp033l7imc/privsep.sock']#033[00m Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.359 163434 INFO oslo.privsep.daemon [-] Spawned new 
privsep daemon via rootwrap#033[00m Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.360 163434 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp033l7imc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.255 163645 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.260 163645 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.263 163645 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.263 163645 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163645#033[00m Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.363 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[39516368-135e-4dca-94b5-63622f25cfe0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31012 DF PROTO=TCP SPT=34392 DPT=9882 SEQ=1245865791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDDC55D0000000001030307) Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.784 163645 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.785 163645 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by 
"neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:24:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:24.785 163645 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:24:25 localhost sshd[163650]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.237 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[8f180145-f361-4693-bca9-bee4500621dd]: (4, ['ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f']) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.240 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, column=external_ids, values=({'neutron:ovn-metadata-id': '90352bbc-1467-52d1-9e15-4f1d6fd40d7a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.241 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.242 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:24:25 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:25.249 163434 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.250 163434 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.250 163434 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.250 163434 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.250 163434 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.251 163434 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Oct 5 05:24:25 localhost systemd-logind[760]: New session 54 of user zuul. 
Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.251 163434 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.251 163434 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.252 163434 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.252 163434 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.252 163434 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.252 163434 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.253 163434 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.253 163434 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.253 163434 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:25.253 163434 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.254 163434 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.254 163434 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.254 163434 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.254 163434 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.254 163434 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.255 163434 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.255 163434 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.255 163434 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:25.255 163434 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.256 163434 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.256 163434 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.256 163434 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.257 163434 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.257 163434 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.257 
163434 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.257 163434 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.258 163434 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.258 163434 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.258 163434 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.258 163434 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.258 163434 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.259 163434 DEBUG oslo_service.service [-] host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.259 163434 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 
2025-10-05 09:24:25.259 163434 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.259 163434 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.260 163434 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.260 163434 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.260 163434 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.260 163434 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.261 163434 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.261 163434 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.261 163434 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 
2025-10-05 09:24:25.261 163434 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.262 163434 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.262 163434 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.262 163434 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.262 163434 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.262 163434 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.263 163434 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.263 163434 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s 
%(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.263 163434 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.263 163434 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.263 163434 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.264 163434 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.264 163434 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.264 163434 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.264 163434 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.265 163434 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.265 
163434 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.265 163434 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.265 163434 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.265 163434 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.266 163434 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.266 163434 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.266 163434 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost systemd[1]: Started Session 54 of User zuul. 
Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.266 163434 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.267 163434 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.267 163434 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.267 163434 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.267 163434 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.267 163434 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.268 163434 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.268 163434 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.268 163434 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.268 163434 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.269 163434 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.269 163434 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.269 163434 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.269 163434 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.270 163434 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.270 163434 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.270 163434 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.270 163434 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.270 163434 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.271 163434 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.271 163434 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.271 163434 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.271 163434 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.271 163434 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.272 163434 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.272 163434 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.272 163434 DEBUG oslo_service.service [-] use_journal = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.272 163434 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.273 163434 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.273 163434 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.273 163434 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.273 163434 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.273 163434 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.274 163434 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.274 163434 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.274 163434 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: 
%(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.274 163434 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.275 163434 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.275 163434 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.276 163434 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.276 163434 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.276 163434 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.276 163434 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.277 163434 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:25.277 163434 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.277 163434 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.277 163434 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.277 163434 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.278 163434 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.278 163434 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.278 163434 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.278 163434 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.279 163434 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.279 163434 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.279 163434 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.279 163434 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.280 163434 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.280 163434 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.280 163434 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.280 163434 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.281 163434 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.281 163434 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.281 163434 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.281 163434 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.282 163434 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.282 163434 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.282 163434 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.282 163434 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.283 163434 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.283 163434 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.283 163434 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.283 163434 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.283 163434 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.284 163434 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.284 163434 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.284 163434 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.284 163434 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.284 163434 DEBUG oslo_service.service [-] 
privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.285 163434 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.285 163434 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.285 163434 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.285 163434 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.286 163434 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.286 163434 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.286 163434 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.286 163434 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 
localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.286 163434 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.287 163434 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.287 163434 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.287 163434 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.287 163434 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.288 163434 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.288 163434 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.288 163434 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.288 163434 DEBUG oslo_service.service [-] 
privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.289 163434 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.289 163434 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.289 163434 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.289 163434 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.289 163434 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.289 163434 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.289 163434 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.289 163434 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:25.290 163434 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.290 163434 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.290 163434 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.290 163434 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.290 163434 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.290 163434 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.290 163434 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.291 163434 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.291 163434 DEBUG oslo_service.service [-] 
oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.291 163434 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.291 163434 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.291 163434 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.291 163434 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.291 163434 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.292 163434 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.292 163434 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.292 163434 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 
localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.292 163434 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.292 163434 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.292 163434 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.292 163434 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.293 163434 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.293 163434 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.293 163434 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.293 163434 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.293 163434 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:24:25.293 163434 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.293 163434 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.293 163434 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.294 163434 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.294 163434 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.294 163434 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.294 163434 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.294 163434 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.294 163434 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 
localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.294 163434 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.294 163434 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.295 163434 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.295 163434 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.295 163434 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.295 163434 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.295 163434 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.295 163434 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.295 163434 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 
localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.296 163434 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.296 163434 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.296 163434 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.296 163434 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.296 163434 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.296 163434 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.296 163434 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.296 163434 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.297 163434 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.297 163434 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.297 163434 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.297 163434 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.297 163434 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.297 163434 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.297 163434 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.297 163434 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.298 163434 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.298 163434 DEBUG oslo_service.service [-] ironic.version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.298 163434 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.298 163434 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.298 163434 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.298 163434 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.298 163434 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.299 163434 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.299 163434 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.299 163434 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.299 163434 DEBUG 
oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.299 163434 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.299 163434 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.299 163434 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.300 163434 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.300 163434 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.300 163434 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.300 163434 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.300 163434 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 
2025-10-05 09:24:25.300 163434 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.300 163434 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.300 163434 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.301 163434 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.301 163434 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.301 163434 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.301 163434 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.301 163434 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.301 163434 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.301 163434 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.302 163434 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.302 163434 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.302 163434 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.302 163434 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.302 163434 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.302 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.302 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.303 163434 DEBUG 
oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.303 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.303 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.303 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.303 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.303 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.303 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.304 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.304 163434 DEBUG oslo_service.service [-] 
oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.304 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.304 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.304 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.304 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.304 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.305 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.305 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.305 163434 DEBUG oslo_service.service [-] 
oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.305 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.305 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.305 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.305 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.305 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.306 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.306 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.306 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.306 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.306 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.306 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.306 163434 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.307 163434 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.307 163434 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.307 163434 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.307 163434 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:24:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:24:25.307 163434 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 5 05:24:26 localhost python3.9[163743]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:24:27 localhost python3.9[163839]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:28 localhost python3.9[163944]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:28 localhost systemd[1]: libpod-8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf.scope: Deactivated successfully. 
Oct 5 05:24:28 localhost podman[163945]: 2025-10-05 09:24:28.291095679 +0000 UTC m=+0.080162155 container died 8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, tcib_managed=true, build-date=2025-07-21T14:56:59, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 05:24:28 localhost podman[163945]: 2025-10-05 09:24:28.327041949 +0000 UTC m=+0.116108425 container cleanup 8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, 
com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.) Oct 5 05:24:28 localhost podman[163961]: 2025-10-05 09:24:28.376066253 +0000 UTC m=+0.078545461 container remove 8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, architecture=x86_64) Oct 5 05:24:28 localhost systemd[1]: libpod-conmon-8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf.scope: Deactivated successfully. 
Oct 5 05:24:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31013 DF PROTO=TCP SPT=34392 DPT=9882 SEQ=1245865791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDDD51D0000000001030307) Oct 5 05:24:29 localhost systemd[1]: var-lib-containers-storage-overlay-f372f5b48a6fb930879a487e82e32d29444a8f7e852ff75f52040cd8edeaeeaf-merged.mount: Deactivated successfully. Oct 5 05:24:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f19abd0535de8a3e00db985ea599acaedf590f739b6ad469b66c3014040d1bf-userdata-shm.mount: Deactivated successfully. Oct 5 05:24:29 localhost python3.9[164068]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:24:29 localhost systemd[1]: Reloading. Oct 5 05:24:29 localhost systemd-sysv-generator[164099]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:24:29 localhost systemd-rc-local-generator[164095]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:31 localhost python3.9[164194]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:24:31 localhost network[164211]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:24:31 localhost network[164212]: 'network-scripts' will be removed from distribution in near future. 
Oct 5 05:24:31 localhost network[164213]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:24:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:35 localhost python3.9[164414]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:35 localhost systemd[1]: Reloading. Oct 5 05:24:35 localhost systemd-sysv-generator[164443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:24:35 localhost systemd-rc-local-generator[164437]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:36 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. 
Oct 5 05:24:36 localhost python3.9[164546]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:37 localhost python3.9[164639]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:38 localhost python3.9[164732]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:39 localhost python3.9[164825]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:40 localhost python3.9[164918]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:41 localhost python3.9[165011]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:42 localhost python3.9[165104]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None 
Oct 5 05:24:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41647 DF PROTO=TCP SPT=34848 DPT=9102 SEQ=1088222660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE0EFE0000000001030307) Oct 5 05:24:43 localhost python3.9[165196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:44 localhost python3.9[165288]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41648 DF PROTO=TCP SPT=34848 DPT=9102 SEQ=1088222660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE131D0000000001030307) Oct 5 05:24:44 localhost python3.9[165380]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:45 localhost python3.9[165472]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41649 DF PROTO=TCP SPT=34848 DPT=9102 SEQ=1088222660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE1B1D0000000001030307) Oct 5 05:24:46 localhost python3.9[165564]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41520 DF PROTO=TCP SPT=41000 DPT=9100 SEQ=1815893499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE1C8D0000000001030307) Oct 5 05:24:47 localhost python3.9[165656]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:24:47 localhost systemd[1]: tmp-crun.55h9hx.mount: Deactivated successfully. Oct 5 05:24:47 localhost podman[165730]: 2025-10-05 09:24:47.689013236 +0000 UTC m=+0.092305903 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:24:47 localhost podman[165730]: 2025-10-05 
09:24:47.765234953 +0000 UTC m=+0.168527630 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:24:47 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:24:47 localhost python3.9[165760]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41521 DF PROTO=TCP SPT=41000 DPT=9100 SEQ=1815893499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE209E0000000001030307) Oct 5 05:24:48 localhost python3.9[165865]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:49 localhost python3.9[165957]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:24:49 localhost podman[166049]: 2025-10-05 09:24:49.681894366 +0000 UTC m=+0.088364405 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:24:49 localhost podman[166049]: 2025-10-05 09:24:49.687826576 +0000 UTC 
m=+0.094296635 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:24:49 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:24:49 localhost python3.9[166050]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41522 DF PROTO=TCP SPT=41000 DPT=9100 SEQ=1815893499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE289D0000000001030307) Oct 5 05:24:50 localhost python3.9[166158]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41650 DF PROTO=TCP SPT=34848 DPT=9102 SEQ=1088222660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE2ADE0000000001030307) Oct 5 05:24:51 localhost python3.9[166250]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:51 localhost python3.9[166342]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56669 DF PROTO=TCP SPT=39398 DPT=9105 SEQ=976298744 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE30BF0000000001030307) Oct 5 05:24:52 localhost python3.9[166434]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:53 localhost python3.9[166526]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:24:54 localhost python3.9[166618]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:24:54 
localhost systemd[1]: Reloading. Oct 5 05:24:54 localhost systemd-sysv-generator[166643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:24:54 localhost systemd-rc-local-generator[166639]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3496 DF PROTO=TCP SPT=51646 DPT=9882 SEQ=2266976425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE3A9D0000000001030307) Oct 5 05:24:55 localhost python3.9[166746]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:56 localhost python3.9[166839]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:58 localhost python3.9[166932]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:58 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3497 DF PROTO=TCP SPT=51646 DPT=9882 SEQ=2266976425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE4A5D0000000001030307) Oct 5 05:24:59 localhost python3.9[167025]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:59 localhost python3.9[167118]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:25:00 localhost python3.9[167211]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:25:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:25:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 701 syncs, 7.43 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 
writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.007 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 
stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bb61f3610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 
MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564bb61f3610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 
interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdo Oct 5 05:25:00 localhost systemd-journald[47722]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 76.0 (253 of 333 items), suggesting rotation. Oct 5 05:25:00 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:25:00 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:25:00 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:25:00 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:25:00 localhost python3.9[167305]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:25:03 localhost python3.9[167398]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None Oct 5 05:25:03 localhost python3.9[167491]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 5 05:25:04 localhost python3.9[167589]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471150.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Oct 5 05:25:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:25:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5451 writes, 24K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5451 writes, 723 syncs, 7.54 writes 
per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.014 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s 
read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eb89d542d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, 
interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eb89d542d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Oct 5 05:25:06 localhost python3.9[167689]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:25:06 localhost python3.9[167743]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None 
releasever=None Oct 5 05:25:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56508 DF PROTO=TCP SPT=56248 DPT=9102 SEQ=2561276847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE842E0000000001030307) Oct 5 05:25:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56509 DF PROTO=TCP SPT=56248 DPT=9102 SEQ=2561276847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE881E0000000001030307) Oct 5 05:25:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56510 DF PROTO=TCP SPT=56248 DPT=9102 SEQ=2561276847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE901E0000000001030307) Oct 5 05:25:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30705 DF PROTO=TCP SPT=50610 DPT=9100 SEQ=3024462716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE91BD0000000001030307) Oct 5 05:25:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30706 DF PROTO=TCP SPT=50610 DPT=9100 SEQ=3024462716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE95DD0000000001030307) Oct 5 05:25:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:25:18 localhost podman[167818]: 2025-10-05 09:25:18.692742959 +0000 UTC m=+0.092788428 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 05:25:18 localhost podman[167818]: 2025-10-05 09:25:18.727177047 +0000 UTC m=+0.127222486 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:25:18 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:25:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30707 DF PROTO=TCP SPT=50610 DPT=9100 SEQ=3024462716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE9DDD0000000001030307) Oct 5 05:25:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:25:20.419 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:25:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:25:20.419 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:25:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:25:20.421 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:25:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56511 DF PROTO=TCP SPT=56248 DPT=9102 SEQ=2561276847 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDE9FDD0000000001030307) Oct 5 05:25:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:25:20 localhost systemd[1]: tmp-crun.6Yvnhg.mount: Deactivated successfully. 
Oct 5 05:25:20 localhost podman[167846]: 2025-10-05 09:25:20.686112669 +0000 UTC m=+0.097729063 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:25:20 localhost podman[167846]: 2025-10-05 09:25:20.716654431 +0000 UTC 
m=+0.128270775 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:25:20 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:25:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63552 DF PROTO=TCP SPT=34696 DPT=9105 SEQ=3693141301 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDEA5EF0000000001030307)
Oct 5 05:25:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40894 DF PROTO=TCP SPT=54650 DPT=9882 SEQ=1437820820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDEAF9D0000000001030307)
Oct 5 05:25:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40895 DF PROTO=TCP SPT=54650 DPT=9882 SEQ=1437820820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDEBF5D0000000001030307)
Oct 5 05:25:31 localhost kernel: SELinux: Converting 2760 SID table entries...
Oct 5 05:25:31 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:25:31 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:25:41 localhost kernel: SELinux: Converting 2763 SID table entries...
Oct 5 05:25:41 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:25:41 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:25:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11698 DF PROTO=TCP SPT=36250 DPT=9102 SEQ=3722902990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDEF95D0000000001030307)
Oct 5 05:25:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11699 DF PROTO=TCP SPT=36250 DPT=9102 SEQ=3722902990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDEFD5D0000000001030307)
Oct 5 05:25:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11700 DF PROTO=TCP SPT=36250 DPT=9102 SEQ=3722902990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF055D0000000001030307)
Oct 5 05:25:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33649 DF PROTO=TCP SPT=46176 DPT=9100 SEQ=3264244624 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF06ED0000000001030307)
Oct 5 05:25:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33650 DF PROTO=TCP SPT=46176 DPT=9100 SEQ=3264244624 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF0ADE0000000001030307)
Oct 5 05:25:49 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=20 res=1
Oct 5 05:25:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:25:49 localhost podman[168959]: 2025-10-05 09:25:49.699572073 +0000 UTC m=+0.097016704 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 05:25:49 localhost podman[168959]: 2025-10-05 09:25:49.768991503 +0000 UTC m=+0.166436114 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 5 05:25:49 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:25:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33651 DF PROTO=TCP SPT=46176 DPT=9100 SEQ=3264244624 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF12DD0000000001030307)
Oct 5 05:25:50 localhost kernel: SELinux: Converting 2763 SID table entries...
Oct 5 05:25:50 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:25:50 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:25:50 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:25:50 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:25:50 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:25:50 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:25:50 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:25:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11701 DF PROTO=TCP SPT=36250 DPT=9102 SEQ=3722902990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF151D0000000001030307)
Oct 5 05:25:51 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=21 res=1
Oct 5 05:25:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:25:51 localhost systemd[1]: tmp-crun.adSEZc.mount: Deactivated successfully.
Oct 5 05:25:51 localhost podman[168993]: 2025-10-05 09:25:51.705366021 +0000 UTC m=+0.101894156 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:25:51 localhost podman[168993]: 2025-10-05 09:25:51.740819146 +0000 UTC m=+0.137347251 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 05:25:51 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:25:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61959 DF PROTO=TCP SPT=36436 DPT=9105 SEQ=3534112545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF1B1F0000000001030307)
Oct 5 05:25:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29362 DF PROTO=TCP SPT=43210 DPT=9882 SEQ=2285550349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF24DE0000000001030307)
Oct 5 05:25:58 localhost kernel: SELinux: Converting 2763 SID table entries...
Oct 5 05:25:58 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:25:58 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:25:58 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:25:58 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:25:58 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:25:58 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:25:58 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:25:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29363 DF PROTO=TCP SPT=43210 DPT=9882 SEQ=2285550349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF349E0000000001030307)
Oct 5 05:26:07 localhost kernel: SELinux: Converting 2763 SID table entries...
Oct 5 05:26:07 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:26:07 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:26:07 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:26:07 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:26:07 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:26:07 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:26:07 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:26:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19384 DF PROTO=TCP SPT=33140 DPT=9102 SEQ=4143220604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF6E8D0000000001030307)
Oct 5 05:26:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19385 DF PROTO=TCP SPT=33140 DPT=9102 SEQ=4143220604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF729D0000000001030307)
Oct 5 05:26:15 localhost kernel: SELinux: Converting 2763 SID table entries...
Oct 5 05:26:15 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:26:15 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:26:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19386 DF PROTO=TCP SPT=33140 DPT=9102 SEQ=4143220604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF7A9D0000000001030307)
Oct 5 05:26:16 localhost systemd[1]: Reloading.
Oct 5 05:26:16 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=24 res=1
Oct 5 05:26:16 localhost systemd-rc-local-generator[169063]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:26:16 localhost systemd-sysv-generator[169068]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:26:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:26:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59076 DF PROTO=TCP SPT=51028 DPT=9100 SEQ=3974619929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF7C1C0000000001030307)
Oct 5 05:26:17 localhost systemd[1]: Reloading.
Oct 5 05:26:17 localhost systemd-rc-local-generator[169103]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:26:17 localhost systemd-sysv-generator[169107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:26:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:26:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59077 DF PROTO=TCP SPT=51028 DPT=9100 SEQ=3974619929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF801E0000000001030307)
Oct 5 05:26:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59078 DF PROTO=TCP SPT=51028 DPT=9100 SEQ=3974619929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF881D0000000001030307)
Oct 5 05:26:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:26:20.421 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:26:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:26:20.423 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:26:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:26:20.424 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:26:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19387 DF PROTO=TCP SPT=33140 DPT=9102 SEQ=4143220604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF8A5E0000000001030307)
Oct 5 05:26:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:26:20 localhost podman[169122]: 2025-10-05 09:26:20.676562451 +0000 UTC m=+0.086190045 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 5 05:26:20 localhost podman[169122]: 2025-10-05 09:26:20.751912892 +0000 UTC m=+0.161540526 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:26:20 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:26:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49005 DF PROTO=TCP SPT=37238 DPT=9105 SEQ=971870074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF904F0000000001030307)
Oct 5 05:26:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:26:22 localhost systemd[1]: tmp-crun.8YRLE8.mount: Deactivated successfully.
Oct 5 05:26:22 localhost podman[169149]: 2025-10-05 09:26:22.693486742 +0000 UTC m=+0.093458323 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 05:26:22 localhost podman[169149]: 2025-10-05 09:26:22.729940321 +0000 UTC m=+0.129911882 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:26:22 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:26:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2728 DF PROTO=TCP SPT=48270 DPT=9882 SEQ=1113154354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDF9A1D0000000001030307)
Oct 5 05:26:25 localhost kernel: SELinux: Converting 2764 SID table entries...
Oct 5 05:26:25 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:26:25 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:26:25 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:26:25 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:26:25 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:26:25 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:26:25 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:26:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2729 DF PROTO=TCP SPT=48270 DPT=9882 SEQ=1113154354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDFA9DD0000000001030307)
Oct 5 05:26:29 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 5 05:26:29 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=25 res=1
Oct 5 05:26:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20556 DF PROTO=TCP SPT=59958 DPT=9102 SEQ=2960170720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDFE3BE0000000001030307)
Oct 5 05:26:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20557 DF PROTO=TCP SPT=59958 DPT=9102 SEQ=2960170720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDFE7DD0000000001030307)
Oct 5 05:26:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20558 DF PROTO=TCP SPT=59958 DPT=9102 SEQ=2960170720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDFEFDD0000000001030307)
Oct 5 05:26:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51734 DF PROTO=TCP SPT=46004 DPT=9100 SEQ=2042777912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDFF14D0000000001030307)
Oct 5 05:26:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51735 DF PROTO=TCP SPT=46004 DPT=9100 SEQ=2042777912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDFF55D0000000001030307)
Oct 5 05:26:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51736 DF PROTO=TCP SPT=46004 DPT=9100 SEQ=2042777912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDFFD5D0000000001030307)
Oct 5 05:26:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20559 DF PROTO=TCP SPT=59958 DPT=9102 SEQ=2960170720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEDFFF9D0000000001030307)
Oct 5 05:26:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:26:51 localhost podman[175427]: 2025-10-05 09:26:51.701903335 +0000 UTC m=+0.089669240 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Oct 5 05:26:51 localhost podman[175427]: 2025-10-05 09:26:51.74488302 +0000 UTC m=+0.132648895 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:26:51 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:26:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26688 DF PROTO=TCP SPT=60606 DPT=9105 SEQ=2863974069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0057F0000000001030307) Oct 5 05:26:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:26:53 localhost podman[176820]: 2025-10-05 09:26:53.679623855 +0000 UTC m=+0.080266649 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:26:53 localhost podman[176820]: 2025-10-05 09:26:53.68982987 +0000 UTC m=+0.090472704 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:26:53 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:26:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22757 DF PROTO=TCP SPT=35936 DPT=9882 SEQ=550081684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE00F5E0000000001030307) Oct 5 05:26:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22758 DF PROTO=TCP SPT=35936 DPT=9882 SEQ=550081684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE01F1D0000000001030307) Oct 5 05:27:12 localhost systemd[1]: Stopping OpenSSH server daemon... Oct 5 05:27:12 localhost systemd[1]: sshd.service: Deactivated successfully. Oct 5 05:27:12 localhost systemd[1]: Stopped OpenSSH server daemon. Oct 5 05:27:12 localhost systemd[1]: sshd.service: Consumed 1.232s CPU time, no IO. Oct 5 05:27:12 localhost systemd[1]: Stopped target sshd-keygen.target. Oct 5 05:27:12 localhost systemd[1]: Stopping sshd-keygen.target... Oct 5 05:27:12 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). 
Oct 5 05:27:12 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 5 05:27:12 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 5 05:27:12 localhost systemd[1]: Reached target sshd-keygen.target. Oct 5 05:27:12 localhost systemd[1]: Starting OpenSSH server daemon... Oct 5 05:27:12 localhost sshd[186997]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:27:12 localhost systemd[1]: Started OpenSSH server daemon. Oct 5 05:27:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3305 DF PROTO=TCP SPT=44472 DPT=9102 SEQ=488240476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE058EE0000000001030307) Oct 5 05:27:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3306 DF PROTO=TCP SPT=44472 DPT=9102 SEQ=488240476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE05CDD0000000001030307) Oct 5 05:27:14 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 05:27:14 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 05:27:14 localhost systemd[1]: Reloading. Oct 5 05:27:14 localhost systemd-rc-local-generator[187596]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:14 localhost systemd-sysv-generator[187601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:15 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 05:27:15 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 05:27:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3307 DF PROTO=TCP SPT=44472 DPT=9102 SEQ=488240476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE064DD0000000001030307) Oct 5 05:27:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59809 DF PROTO=TCP SPT=38084 DPT=9100 SEQ=1845346906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0667D0000000001030307) Oct 5 05:27:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59810 DF PROTO=TCP SPT=38084 DPT=9100 SEQ=1845346906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE06A9D0000000001030307) Oct 5 05:27:19 localhost python3.9[192378]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:19 localhost systemd[1]: Reloading. Oct 5 05:27:19 localhost systemd-rc-local-generator[192802]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:19 localhost systemd-sysv-generator[192805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59811 DF PROTO=TCP SPT=38084 DPT=9100 SEQ=1845346906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0729D0000000001030307) Oct 5 05:27:20 localhost python3.9[193444]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:20 localhost systemd[1]: Reloading. Oct 5 05:27:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:27:20.422 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:27:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:27:20.423 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:27:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:27:20.425 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:27:20 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3308 DF PROTO=TCP SPT=44472 DPT=9102 SEQ=488240476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0749E0000000001030307) Oct 5 05:27:20 localhost systemd-sysv-generator[193559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:20 localhost systemd-rc-local-generator[193555]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:21 localhost python3.9[193990]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:21 localhost systemd[1]: Reloading. Oct 5 05:27:21 localhost systemd-rc-local-generator[194264]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:21 localhost systemd-sysv-generator[194270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:27:21 localhost podman[194369]: 2025-10-05 09:27:21.966544837 +0000 UTC m=+0.086044063 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:27:22 localhost podman[194369]: 2025-10-05 09:27:22.004941666 +0000 UTC m=+0.124440912 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:27:22 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:27:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8334 DF PROTO=TCP SPT=34294 DPT=9105 SEQ=1051745972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE07AAF0000000001030307) Oct 5 05:27:22 localhost python3.9[194710]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:23 localhost systemd[1]: Reloading. Oct 5 05:27:23 localhost systemd-sysv-generator[195460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:27:23 localhost systemd-rc-local-generator[195457]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:27:24 localhost podman[195576]: 2025-10-05 09:27:24.114877319 +0000 UTC m=+0.089560415 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:27:24 localhost podman[195576]: 2025-10-05 09:27:24.146093915 +0000 UTC m=+0.120777001 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent) Oct 5 05:27:24 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:27:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54128 DF PROTO=TCP SPT=43682 DPT=9882 SEQ=3386614609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0845D0000000001030307) Oct 5 05:27:24 localhost python3.9[195979]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:24 localhost systemd[1]: Reloading. Oct 5 05:27:25 localhost systemd-rc-local-generator[196238]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:25 localhost systemd-sysv-generator[196245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:26 localhost python3.9[196709]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:26 localhost systemd[1]: Reloading. 
Oct 5 05:27:26 localhost systemd-sysv-generator[196949]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:26 localhost systemd-rc-local-generator[196946]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:26 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 05:27:26 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 05:27:26 localhost systemd[1]: man-db-cache-update.service: Consumed 14.225s CPU time. Oct 5 05:27:26 localhost systemd[1]: run-rc225fe130700411c8cb1f6b6fb73d7a2.service: Deactivated successfully. Oct 5 05:27:26 localhost systemd[1]: run-r0fc4a42386a0457fb1ea5f2db694d5ef.service: Deactivated successfully. Oct 5 05:27:27 localhost python3.9[197198]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:27 localhost systemd[1]: Reloading. Oct 5 05:27:27 localhost systemd-rc-local-generator[197228]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:27 localhost systemd-sysv-generator[197231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:27:28 localhost python3.9[197347]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54129 DF PROTO=TCP SPT=43682 DPT=9882 SEQ=3386614609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0941D0000000001030307) Oct 5 05:27:29 localhost python3.9[197460]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:30 localhost systemd[1]: Reloading. Oct 5 05:27:30 localhost systemd-sysv-generator[197507]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:30 localhost systemd-rc-local-generator[197504]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:31 localhost python3.9[197626]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:31 localhost systemd[1]: Reloading. Oct 5 05:27:31 localhost systemd-rc-local-generator[197653]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:31 localhost systemd-sysv-generator[197656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:32 localhost python3.9[197775]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:34 localhost python3.9[197888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:36 localhost python3.9[198001]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:37 localhost python3.9[198114]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:39 localhost python3.9[198227]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:40 localhost python3.9[198340]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:40 localhost python3.9[198453]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:41 localhost 
python3.9[198566]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38396 DF PROTO=TCP SPT=48014 DPT=9102 SEQ=3132266254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0CE1D0000000001030307) Oct 5 05:27:43 localhost python3.9[198679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:44 localhost python3.9[198792]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38397 DF PROTO=TCP SPT=48014 DPT=9102 SEQ=3132266254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0D21E0000000001030307) Oct 5 05:27:46 localhost python3.9[198905]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38398 DF PROTO=TCP SPT=48014 DPT=9102 SEQ=3132266254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0DA1D0000000001030307) Oct 5 05:27:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6582 DF PROTO=TCP SPT=43080 DPT=9100 SEQ=1171743593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0DBAD0000000001030307) Oct 5 05:27:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6583 DF PROTO=TCP SPT=43080 DPT=9100 SEQ=1171743593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0DF9D0000000001030307) Oct 5 05:27:48 localhost python3.9[199018]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:49 localhost python3.9[199131]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:49 localhost python3.9[199244]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6584 DF PROTO=TCP SPT=43080 DPT=9100 SEQ=1171743593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0E79D0000000001030307) Oct 5 05:27:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38399 DF PROTO=TCP SPT=48014 DPT=9102 SEQ=3132266254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0E9DD0000000001030307) Oct 5 05:27:51 localhost python3.9[199357]: 
ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63661 DF PROTO=TCP SPT=33910 DPT=9105 SEQ=3145714334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0EFDF0000000001030307) Oct 5 05:27:52 localhost python3.9[199467]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:27:52 localhost systemd[1]: tmp-crun.1V10Fw.mount: Deactivated successfully. 
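The systemd warning earlier in this log (`insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead`) is the deprecated cgroup v1 directive; the usual remedy is a drop-in override rather than editing the packaged unit. A minimal sketch — the drop-in path follows the standard `.service.d` convention, and the `512M` value is illustrative, not taken from this log:

```ini
# /etc/systemd/system/insights-client-boot.service.d/memory.conf
# Hypothetical drop-in: clear the deprecated setting, then set the
# cgroup v2 equivalent. Follow with `systemctl daemon-reload`.
[Service]
MemoryLimit=
MemoryMax=512M
```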
Oct 5 05:27:52 localhost podman[199578]: 2025-10-05 09:27:52.581054787 +0000 UTC m=+0.096067920 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:27:52 localhost podman[199578]: 2025-10-05 09:27:52.661071031 +0000 UTC m=+0.176084164 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:27:52 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
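The recurring kernel `DROPPING:` records above are netfilter LOG-style lines: whitespace-separated `KEY=value` tokens plus bare TCP flag markers (`DF`, `SYN`). When triaging which destination ports are being blocked (here repeatedly 9100/9102/9105, typical exporter ports), it can help to parse them. A small sketch, assuming only the field layout visible in this log:

```python
def parse_drop(line: str) -> dict:
    """Parse one kernel 'DROPPING:' firewall record into a dict.

    KEY=value tokens become string entries; bare upper-case flag
    tokens such as SYN or DF become True.
    """
    fields = {}
    _, _, rest = line.partition("DROPPING:")  # drop the log prefix
    for token in rest.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value          # e.g. DPT -> '9102'; OUT -> ''
        elif token.isalpha() and token.isupper():
            fields[token] = True         # flags: DF, SYN, ...
    return fields

line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb "
        "MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 "
        "DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38396 DF "
        "PROTO=TCP SPT=48014 DPT=9102 SEQ=3132266254 ACK=0 WINDOW=32640 "
        "RES=0x00 SYN URGP=0")
rec = parse_drop(line)
print(rec["SRC"], rec["DPT"], rec["SYN"])  # 192.168.122.10 9102 True
```

Feeding every `DROPPING:` line through this and tallying `rec["DPT"]` with `collections.Counter` quickly shows which scrape ports the firewall is rejecting.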
Oct 5 05:27:52 localhost python3.9[199577]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:53 localhost python3.9[199713]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:53 localhost python3.9[199823]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:27:54 localhost podman[199934]: 2025-10-05 09:27:54.479697453 +0000 UTC m=+0.093124891 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Oct 5 05:27:54 localhost podman[199934]: 2025-10-05 09:27:54.485696966 +0000 UTC 
m=+0.099124384 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:27:54 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
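The long run of `ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virt*.socket ...` records earlier in the log is what a looped `ansible.builtin.systemd` task emits on the managed host. A rough reconstruction from the logged module arguments — this is not the actual edpm_ansible source, the task name is hypothetical, and the unit list is abbreviated (each daemon also has `-ro` and `-admin` sockets):

```yaml
- name: Enable libvirt modular socket units
  ansible.builtin.systemd:
    name: "{{ item }}"
    enabled: true
    masked: false
  loop:
    - virtlogd.socket
    - virtnodedevd.socket
    - virtproxyd.socket
    - virtqemud.socket
    - virtsecretd.socket
```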
Oct 5 05:27:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38394 DF PROTO=TCP SPT=52648 DPT=9882 SEQ=1533471210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE0F99D0000000001030307) Oct 5 05:27:54 localhost python3.9[199933]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:55 localhost python3.9[200061]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:27:56 localhost python3.9[200151]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656474.8090444-1643-208934442441019/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:27:57 localhost python3.9[200261]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:27:57 localhost python3.9[200351]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656476.5545518-1643-119840811714741/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:27:58 localhost python3.9[200461]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:27:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38395 DF PROTO=TCP SPT=52648 DPT=9882 SEQ=1533471210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1095D0000000001030307) Oct 5 05:27:58 localhost python3.9[200551]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656477.8641822-1643-153678704445777/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:27:59 localhost python3.9[200661]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:00 localhost python3.9[200751]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656479.0861726-1643-11506648528868/.source.conf follow=False 
_original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:01 localhost python3.9[200861]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:02 localhost python3.9[200951]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656480.907862-1643-78684481696069/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:03 localhost python3.9[201061]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:03 localhost python3.9[201151]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656482.152558-1643-195383769177707/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:04 localhost python3.9[201261]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:04 localhost python3.9[201349]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656483.811744-1643-71608315351052/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:05 localhost python3.9[201459]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:06 localhost python3.9[201549]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656485.0587702-1643-93714220166325/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:06 localhost python3.9[201659]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:07 localhost python3.9[201769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:08 localhost python3.9[201879]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:08 localhost python3.9[201989]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:09 localhost python3.9[202099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:10 localhost python3.9[202209]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:10 localhost python3.9[202319]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:11 localhost python3.9[202429]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:12 localhost python3.9[202539]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:13 localhost python3.9[202649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None 
selevel=None setype=None attributes=None Oct 5 05:28:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46579 DF PROTO=TCP SPT=33382 DPT=9102 SEQ=823113870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1434D0000000001030307) Oct 5 05:28:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46580 DF PROTO=TCP SPT=33382 DPT=9102 SEQ=823113870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1475D0000000001030307) Oct 5 05:28:14 localhost python3.9[202759]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:15 localhost python3.9[202869]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:15 localhost python3.9[202979]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:16 localhost python3.9[203089]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46581 DF PROTO=TCP SPT=33382 DPT=9102 SEQ=823113870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE14F5D0000000001030307) Oct 5 05:28:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5075 DF PROTO=TCP SPT=37686 DPT=9100 SEQ=691251080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE150DD0000000001030307) Oct 5 05:28:17 localhost python3.9[203199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:17 localhost python3.9[203309]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:17 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5076 DF PROTO=TCP SPT=37686 DPT=9100 SEQ=691251080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE154DD0000000001030307) Oct 5 05:28:18 localhost python3.9[203397]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656497.2340832-2306-237729169909120/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:18 localhost python3.9[203507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:19 localhost python3.9[203595]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656498.3522046-2306-25800928840885/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5077 DF PROTO=TCP SPT=37686 DPT=9100 SEQ=691251080 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080ABEE15CDD0000000001030307)
Oct 5 05:28:19 localhost python3.9[203705]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:28:20.424 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:28:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:28:20.425 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:28:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:28:20.426 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:28:20 localhost python3.9[203793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656499.482915-2306-219243865887945/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46582 DF PROTO=TCP SPT=33382 DPT=9102 SEQ=823113870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE15F1D0000000001030307)
Oct 5 05:28:21 localhost python3.9[203903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:21 localhost python3.9[203991]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656500.6164818-2306-95580410864658/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1723 DF PROTO=TCP SPT=36970 DPT=9105 SEQ=3839358268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1650F0000000001030307)
Oct 5 05:28:22 localhost python3.9[204101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:22 localhost python3.9[204189]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656501.8289597-2306-102962602067615/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:28:23 localhost podman[204300]: 2025-10-05 09:28:23.415511068 +0000 UTC m=+0.083590752 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 5 05:28:23 localhost podman[204300]: 2025-10-05 09:28:23.449141488 +0000 UTC m=+0.117221122 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 5 05:28:23 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:28:23 localhost python3.9[204299]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:24 localhost python3.9[204413]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656503.000757-2306-92568079756973/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39574 DF PROTO=TCP SPT=53436 DPT=9882 SEQ=1601411053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE16EDD0000000001030307)
Oct 5 05:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:28:24 localhost systemd[1]: tmp-crun.FufeJ0.mount: Deactivated successfully.
Oct 5 05:28:24 localhost podman[204419]: 2025-10-05 09:28:24.680500531 +0000 UTC m=+0.084868637 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 5 05:28:24 localhost podman[204419]: 2025-10-05 09:28:24.714627794 +0000 UTC m=+0.118995940 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 05:28:24 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:28:25 localhost python3.9[204541]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:25 localhost python3.9[204629]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656504.6870613-2306-273084018651875/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:26 localhost python3.9[204739]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:27 localhost python3.9[204827]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656505.8742955-2306-279577743429440/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:28 localhost python3.9[204937]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39575 DF PROTO=TCP SPT=53436 DPT=9882 SEQ=1601411053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE17E9E0000000001030307)
Oct 5 05:28:29 localhost python3.9[205025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656508.0376036-2306-210384040752597/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:29 localhost python3.9[205135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:30 localhost python3.9[205259]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656509.1648858-2306-50003117453952/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:30 localhost python3.9[205442]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:31 localhost python3.9[205547]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656510.3605547-2306-46497953871870/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:32 localhost python3.9[205675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:32 localhost python3.9[205763]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656511.5765586-2306-162131216487298/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:33 localhost python3.9[205873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:33 localhost python3.9[205961]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656512.741025-2306-267799742382768/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:34 localhost python3.9[206071]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:35 localhost python3.9[206159]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656513.9507701-2306-260300229997974/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:35 localhost python3.9[206267]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:28:36 localhost python3.9[206380]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 5 05:28:37 localhost python3.9[206490]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:28:37 localhost systemd[1]: Reloading.
Oct 5 05:28:37 localhost systemd-rc-local-generator[206511]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:28:37 localhost systemd-sysv-generator[206515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:28:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:28:38 localhost systemd[1]: Starting libvirt logging daemon socket...
Oct 5 05:28:38 localhost systemd[1]: Listening on libvirt logging daemon socket.
Oct 5 05:28:38 localhost systemd[1]: Starting libvirt logging daemon admin socket...
Oct 5 05:28:38 localhost systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 5 05:28:38 localhost systemd[1]: Starting libvirt logging daemon...
Oct 5 05:28:38 localhost systemd[1]: Started libvirt logging daemon.
Oct 5 05:28:38 localhost python3.9[206642]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:28:38 localhost systemd[1]: Reloading.
Oct 5 05:28:39 localhost systemd-rc-local-generator[206663]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:28:39 localhost systemd-sysv-generator[206669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:28:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:28:39 localhost systemd[1]: Starting libvirt nodedev daemon socket...
Oct 5 05:28:39 localhost systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 5 05:28:39 localhost systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 5 05:28:39 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 5 05:28:39 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 5 05:28:39 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 5 05:28:39 localhost systemd[1]: Starting libvirt nodedev daemon...
Oct 5 05:28:39 localhost systemd[1]: Started libvirt nodedev daemon.
Oct 5 05:28:40 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 5 05:28:40 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 5 05:28:40 localhost python3.9[206816]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:28:40 localhost systemd[1]: Reloading.
Oct 5 05:28:40 localhost setroubleshoot[206817]: Deleting alert bc7c4c30-a01c-4646-a4d2-6e9ebe46b7a9, it is allowed in current policy
Oct 5 05:28:40 localhost systemd-sysv-generator[206852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:28:40 localhost systemd-rc-local-generator[206849]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:28:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:28:40 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service.
Oct 5 05:28:40 localhost systemd[1]: Starting libvirt proxy daemon socket...
Oct 5 05:28:40 localhost systemd[1]: Listening on libvirt proxy daemon socket.
Oct 5 05:28:40 localhost systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 5 05:28:40 localhost systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 5 05:28:40 localhost systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 5 05:28:40 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 5 05:28:40 localhost systemd[1]: Starting libvirt proxy daemon...
Oct 5 05:28:40 localhost systemd[1]: Started libvirt proxy daemon.
Oct 5 05:28:41 localhost python3.9[206994]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:28:41 localhost systemd[1]: Reloading.
Oct 5 05:28:41 localhost systemd-rc-local-generator[207019]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:28:41 localhost systemd-sysv-generator[207022]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:28:41 localhost setroubleshoot[206817]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ae91256d-ae2c-4c9b-9b5c-cbdf221a7747
Oct 5 05:28:41 localhost setroubleshoot[206817]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 5 05:28:41 localhost setroubleshoot[206817]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ae91256d-ae2c-4c9b-9b5c-cbdf221a7747
Oct 5 05:28:41 localhost setroubleshoot[206817]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 5 05:28:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:28:42 localhost systemd[1]: Listening on libvirt locking daemon socket.
Oct 5 05:28:42 localhost systemd[1]: Starting libvirt QEMU daemon socket...
Oct 5 05:28:42 localhost systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 5 05:28:42 localhost systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 5 05:28:42 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 5 05:28:42 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 5 05:28:42 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 5 05:28:42 localhost systemd[1]: Starting libvirt QEMU daemon...
Oct 5 05:28:42 localhost systemd[1]: Started libvirt QEMU daemon.
Oct 5 05:28:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13232 DF PROTO=TCP SPT=48140 DPT=9102 SEQ=4219981695 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1B87D0000000001030307)
Oct 5 05:28:43 localhost python3.9[207176]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:28:43 localhost systemd[1]: Reloading.
Oct 5 05:28:43 localhost systemd-rc-local-generator[207208]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:28:43 localhost systemd-sysv-generator[207212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:28:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:28:44 localhost systemd[1]: Starting libvirt secret daemon socket...
Oct 5 05:28:44 localhost systemd[1]: Listening on libvirt secret daemon socket.
Oct 5 05:28:44 localhost systemd[1]: Starting libvirt secret daemon admin socket...
Oct 5 05:28:44 localhost systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 5 05:28:44 localhost systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 5 05:28:44 localhost systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 5 05:28:44 localhost systemd[1]: Starting libvirt secret daemon...
Oct 5 05:28:44 localhost systemd[1]: Started libvirt secret daemon.
Oct 5 05:28:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13233 DF PROTO=TCP SPT=48140 DPT=9102 SEQ=4219981695 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1BC9D0000000001030307)
Oct 5 05:28:44 localhost python3.9[207357]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:45 localhost python3.9[207467]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 5 05:28:46 localhost python3.9[207577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:28:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13234 DF PROTO=TCP SPT=48140 DPT=9102 SEQ=4219981695 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1C49D0000000001030307)
Oct 5 05:28:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41648 DF PROTO=TCP SPT=56458 DPT=9100 SEQ=945059907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1C60C0000000001030307)
Oct 5 05:28:46 localhost python3.9[207689]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 5 05:28:47 localhost python3.9[207797]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:28:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41649 DF PROTO=TCP SPT=56458 DPT=9100 SEQ=945059907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1CA1D0000000001030307)
Oct 5 05:28:49 localhost python3.9[207883]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656527.4583166-3170-122153702646997/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c0cd5a488d0709b14bfd915c93171010d2c54076 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:49 localhost python3.9[207993]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 659062ac-50b4-5607-b699-3105da7f55ee#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:28:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41650 DF PROTO=TCP SPT=56458 DPT=9100 SEQ=945059907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1D21D0000000001030307)
Oct 5 05:28:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13235 DF PROTO=TCP SPT=48140 DPT=9102 SEQ=4219981695 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1D45D0000000001030307)
Oct 5 05:28:50 localhost python3.9[208113]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:28:51 localhost sshd[208224]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:28:51 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully.
Oct 5 05:28:51 localhost systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 5 05:28:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1967 DF PROTO=TCP SPT=41882 DPT=9105 SEQ=1120035477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1DA400000000001030307) Oct 5 05:28:52 localhost python3.9[208453]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:28:53 localhost python3.9[208563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:53 localhost podman[208564]: 2025-10-05 09:28:53.686032792 +0000 UTC m=+0.088209996 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:28:53 localhost podman[208564]: 2025-10-05 09:28:53.730775023 +0000 UTC m=+0.132952267 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 05:28:53 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:28:54 localhost python3.9[208676]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656533.164915-3335-252279480762256/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64886 DF PROTO=TCP SPT=51198 DPT=9882 SEQ=1782349388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1E41D0000000001030307) Oct 5 05:28:54 localhost python3.9[208786]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:28:55 localhost systemd[1]: tmp-crun.SnMOfS.mount: Deactivated successfully. 
Oct 5 05:28:55 localhost podman[208896]: 2025-10-05 09:28:55.624732582 +0000 UTC m=+0.079171072 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:28:55 localhost podman[208896]: 2025-10-05 09:28:55.633804189 +0000 UTC 
m=+0.088242729 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent) Oct 5 05:28:55 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:28:55 localhost python3.9[208897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:56 localhost python3.9[208971]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:56 localhost python3.9[209082]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:57 localhost python3.9[209139]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.pmf6hdtk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:58 localhost python3.9[209249]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:58 localhost python3.9[209306]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft 
recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64887 DF PROTO=TCP SPT=51198 DPT=9882 SEQ=1782349388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE1F3DE0000000001030307) Oct 5 05:28:59 localhost python3.9[209416]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:29:00 localhost python3[209527]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 5 05:29:00 localhost python3.9[209637]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:01 localhost python3.9[209695]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:02 localhost python3.9[209805]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 
05:29:02 localhost python3.9[209862]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:02 localhost sshd[209950]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:29:03 localhost sshd[209975]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:29:03 localhost python3.9[209974]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:03 localhost python3.9[210033]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:04 localhost python3.9[210144]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:04 localhost python3.9[210201]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:05 localhost python3.9[210311]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:06 localhost python3.9[210401]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656545.203314-3710-87943188994165/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:07 localhost python3.9[210511]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:07 localhost python3.9[210621]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:29:09 localhost python3.9[210735]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include 
"/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:10 localhost python3.9[210845]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:29:11 localhost python3.9[210956]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:29:11 localhost python3.9[211068]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:29:12 localhost python3.9[211181]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:13 localhost python3.9[211292]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55843 DF PROTO=TCP SPT=45776 DPT=9102 SEQ=1272312720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE22DAD0000000001030307) Oct 5 05:29:13 localhost python3.9[211380]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656552.8054607-3926-110975122556789/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55844 DF PROTO=TCP SPT=45776 DPT=9102 SEQ=1272312720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2319D0000000001030307) Oct 5 05:29:14 localhost python3.9[211490]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:14 localhost sshd[211574]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:29:15 localhost python3.9[211580]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656554.125516-3971-102604598640800/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True 
remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:15 localhost python3.9[211690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:16 localhost python3.9[211779]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656555.3252578-4016-158600699059771/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55845 DF PROTO=TCP SPT=45776 DPT=9102 SEQ=1272312720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2399E0000000001030307) Oct 5 05:29:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20156 DF PROTO=TCP SPT=41700 DPT=9100 SEQ=2038747661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE23B3D0000000001030307) Oct 5 05:29:17 localhost python3.9[211889]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:17 localhost systemd[1]: Reloading. 
Oct 5 05:29:17 localhost systemd-rc-local-generator[211912]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:17 localhost systemd-sysv-generator[211917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:17 localhost systemd[1]: Reached target edpm_libvirt.target. Oct 5 05:29:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20157 DF PROTO=TCP SPT=41700 DPT=9100 SEQ=2038747661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE23F5D0000000001030307) Oct 5 05:29:18 localhost python3.9[212038]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 05:29:18 localhost systemd[1]: Reloading. Oct 5 05:29:18 localhost systemd-rc-local-generator[212064]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:18 localhost systemd-sysv-generator[212069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:18 localhost systemd[1]: Reloading. 
Oct 5 05:29:18 localhost systemd-sysv-generator[212103]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:18 localhost systemd-rc-local-generator[212099]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:19 localhost systemd[1]: session-54.scope: Deactivated successfully. Oct 5 05:29:19 localhost systemd[1]: session-54.scope: Consumed 3min 42.017s CPU time. Oct 5 05:29:19 localhost systemd-logind[760]: Session 54 logged out. Waiting for processes to exit. Oct 5 05:29:19 localhost systemd-logind[760]: Removed session 54. Oct 5 05:29:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20158 DF PROTO=TCP SPT=41700 DPT=9100 SEQ=2038747661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2475D0000000001030307) Oct 5 05:29:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:29:20.425 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:29:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:29:20.426 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:29:20 localhost ovn_metadata_agent[163429]: 2025-10-05 
09:29:20.428 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:29:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55846 DF PROTO=TCP SPT=45776 DPT=9102 SEQ=1272312720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2495D0000000001030307) Oct 5 05:29:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47833 DF PROTO=TCP SPT=44772 DPT=9105 SEQ=741082972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE24F6F0000000001030307) Oct 5 05:29:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54569 DF PROTO=TCP SPT=53210 DPT=9882 SEQ=2160872289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2591D0000000001030307) Oct 5 05:29:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:29:24 localhost podman[212133]: 2025-10-05 09:29:24.691003216 +0000 UTC m=+0.093129345 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3) Oct 5 05:29:24 localhost podman[212133]: 2025-10-05 09:29:24.737850104 +0000 UTC m=+0.139976233 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:29:24 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:29:25 localhost sshd[212159]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:29:25 localhost systemd-logind[760]: New session 55 of user zuul. Oct 5 05:29:25 localhost systemd[1]: Started Session 55 of User zuul. Oct 5 05:29:26 localhost python3.9[212270]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:29:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:29:26 localhost podman[212275]: 2025-10-05 09:29:26.664502987 +0000 UTC m=+0.072604638 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 05:29:26 localhost podman[212275]: 2025-10-05 09:29:26.694837223 +0000 UTC 
m=+0.102938864 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:29:26 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:29:27 localhost python3.9[212403]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:28 localhost python3.9[212513]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54570 DF PROTO=TCP SPT=53210 DPT=9882 SEQ=2160872289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE268DD0000000001030307) Oct 5 05:29:28 localhost python3.9[212623]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:29 localhost python3.9[212733]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:29:30 localhost python3.9[212843]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:31 localhost python3.9[212953]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:29:32 localhost python3.9[213133]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:33 localhost systemd[1]: Reloading. Oct 5 05:29:33 localhost systemd-rc-local-generator[213179]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:33 localhost systemd-sysv-generator[213182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:35 localhost python3.9[213300]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:29:35 localhost network[213317]: You are using 'network' service provided by 'network-scripts', which are now deprecated. 
Oct 5 05:29:35 localhost network[213318]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:29:35 localhost network[213319]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:29:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:39 localhost python3.9[213551]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:40 localhost systemd[1]: Reloading. Oct 5 05:29:40 localhost systemd-rc-local-generator[213576]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:40 localhost systemd-sysv-generator[213582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:29:41 localhost python3.9[213697]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:29:41 localhost python3.9[213807]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:29:42 localhost python3.9[213919]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6162 DF PROTO=TCP SPT=36558 DPT=9102 SEQ=2620284964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2A2DE0000000001030307) Oct 5 05:29:43 localhost python3.9[214029]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:44 localhost python3.9[214139]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:44 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6163 DF PROTO=TCP SPT=36558 DPT=9102 SEQ=2620284964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2A6DE0000000001030307) Oct 5 05:29:45 localhost python3.9[214196]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:46 localhost python3.9[214306]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6164 DF PROTO=TCP SPT=36558 DPT=9102 SEQ=2620284964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2AEDD0000000001030307) Oct 5 05:29:46 localhost python3.9[214363]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:46 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24564 DF PROTO=TCP SPT=45696 DPT=9100 SEQ=1988099746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2B06D0000000001030307) Oct 5 05:29:47 localhost python3.9[214473]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24565 DF PROTO=TCP SPT=45696 DPT=9100 SEQ=1988099746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2B45D0000000001030307) Oct 5 05:29:48 localhost python3.9[214583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:48 localhost python3.9[214640]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:49 localhost python3.9[214750]: ansible-ansible.legacy.stat Invoked with 
path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24566 DF PROTO=TCP SPT=45696 DPT=9100 SEQ=1988099746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2BC5D0000000001030307) Oct 5 05:29:50 localhost python3.9[214807]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6165 DF PROTO=TCP SPT=36558 DPT=9102 SEQ=2620284964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2BE9D0000000001030307) Oct 5 05:29:51 localhost python3.9[214917]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:51 localhost systemd[1]: Reloading. Oct 5 05:29:51 localhost systemd-rc-local-generator[214940]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:51 localhost systemd-sysv-generator[214944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27296 DF PROTO=TCP SPT=56170 DPT=9105 SEQ=4275784996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2C49F0000000001030307) Oct 5 05:29:52 localhost python3.9[215065]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:52 localhost python3.9[215122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:53 localhost python3.9[215232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:53 localhost python3.9[215289]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65031 DF PROTO=TCP SPT=36640 DPT=9882 SEQ=1497136501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2CE5E0000000001030307) Oct 5 05:29:54 localhost python3.9[215399]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:54 localhost systemd[1]: Reloading. Oct 5 05:29:54 localhost systemd-sysv-generator[215427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:54 localhost systemd-rc-local-generator[215424]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:29:55 localhost systemd[1]: Starting Create netns directory... Oct 5 05:29:55 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:29:55 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:29:55 localhost systemd[1]: Finished Create netns directory. Oct 5 05:29:55 localhost systemd[1]: tmp-crun.1RsgC6.mount: Deactivated successfully. 
Oct 5 05:29:55 localhost podman[215437]: 2025-10-05 09:29:55.100480964 +0000 UTC m=+0.086234771 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 05:29:55 localhost podman[215437]: 2025-10-05 09:29:55.139964439 +0000 UTC m=+0.125718226 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 5 05:29:55 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:29:56 localhost python3.9[215578]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:29:57 localhost systemd[1]: tmp-crun.rbYFZ1.mount: Deactivated successfully. 
Oct 5 05:29:57 localhost podman[215689]: 2025-10-05 09:29:57.603651536 +0000 UTC m=+0.095214826 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:29:57 localhost podman[215689]: 2025-10-05 09:29:57.63283032 +0000 UTC 
m=+0.124393620 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:29:57 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:29:57 localhost python3.9[215688]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65032 DF PROTO=TCP SPT=36640 DPT=9882 SEQ=1497136501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE2DE1D0000000001030307) Oct 5 05:29:59 localhost python3.9[215793]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656597.2144082-695-25558523343832/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:00 localhost python3.9[215903]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:01 localhost python3.9[216013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:01 localhost python3.9[216103]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656600.8834038-770-71820522337050/.source.json _original_basename=.b6ze_zr7 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:02 localhost python3.9[216213]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:05 localhost python3.9[216521]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False Oct 5 05:30:06 localhost python3.9[216631]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:30:06 localhost python3.9[216741]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:30:11 localhost python3[216877]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:30:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46688 DF PROTO=TCP SPT=50820 DPT=9102 SEQ=778214288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3180E0000000001030307) Oct 5 05:30:13 
localhost podman[216892]: 2025-10-05 09:30:11.480101461 +0000 UTC m=+0.046920979 image pull quay.io/podified-antelope-centos9/openstack-iscsid:current-podified Oct 5 05:30:13 localhost python3[216877]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "777353c8928aa59ae2473c1d38acf1eefa9a0dfeca7b821fed936f9ff9383648",#012 "Digest": "sha256:3ec0a9b9c48d1a633c4ec38a126dcd9e46ea9b27d706d3382d04e2097a666bce",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid@sha256:3ec0a9b9c48d1a633c4ec38a126dcd9e46ea9b27d706d3382d04e2097a666bce"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:14:31.883735142Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 403870347,#012 "VirtualSize": 403870347,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:5517f28613540e56901977cf7926b9c77e610f33e0d02e83afbce9137bbc7d2a"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" 
org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main 
keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:05.877369315Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which Oct 5 05:30:13 localhost podman[216953]: 2025-10-05 09:30:13.700689958 +0000 UTC m=+0.070653000 container remove 5a21a219091a6e3593fb4095fdb325a7488dcd44b7c7093e33f28ac0ff7dd1bb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, config_id=tripleo_step3, distribution-scope=public) Oct 5 05:30:13 localhost python3[216877]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force iscsid Oct 5 05:30:13 localhost podman[216967]: Oct 5 05:30:13 localhost podman[216967]: 2025-10-05 09:30:13.803479529 +0000 UTC m=+0.082532040 container create 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid) Oct 5 05:30:13 localhost podman[216967]: 2025-10-05 09:30:13.76480947 +0000 UTC m=+0.043862011 image pull quay.io/podified-antelope-centos9/openstack-iscsid:current-podified Oct 5 05:30:13 localhost python3[216877]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified Oct 5 05:30:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46689 DF PROTO=TCP SPT=50820 DPT=9102 SEQ=778214288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE31C1D0000000001030307) Oct 5 05:30:14 localhost python3.9[217114]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:15 localhost python3.9[217226]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:15 localhost python3.9[217281]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46690 DF PROTO=TCP SPT=50820 DPT=9102 SEQ=778214288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3241D0000000001030307) Oct 5 05:30:16 localhost python3.9[217390]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656616.0609171-1034-33161149067332/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48581 DF PROTO=TCP SPT=34150 DPT=9100 SEQ=1819679108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3259D0000000001030307) Oct 5 05:30:17 localhost python3.9[217445]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:30:17 localhost systemd[1]: Reloading. 
Oct 5 05:30:17 localhost systemd-rc-local-generator[217474]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:30:17 localhost systemd-sysv-generator[217477]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:30:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:30:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48582 DF PROTO=TCP SPT=34150 DPT=9100 SEQ=1819679108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3299D0000000001030307) Oct 5 05:30:18 localhost python3.9[217537]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:30:18 localhost systemd[1]: Reloading. Oct 5 05:30:18 localhost systemd-rc-local-generator[217563]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:30:18 localhost systemd-sysv-generator[217569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:30:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:30:18 localhost systemd[1]: Starting iscsid container... Oct 5 05:30:18 localhost systemd[1]: Started libcrun container. 
Oct 5 05:30:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb490b85a5e30771de8f3f649ab3a4d7acfa68677b7ed446bed134ffb28c8fe/merged/etc/target supports timestamps until 2038 (0x7fffffff) Oct 5 05:30:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb490b85a5e30771de8f3f649ab3a4d7acfa68677b7ed446bed134ffb28c8fe/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:30:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb490b85a5e30771de8f3f649ab3a4d7acfa68677b7ed446bed134ffb28c8fe/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:30:18 localhost podman[217577]: 2025-10-05 09:30:18.856126276 +0000 UTC m=+0.154286590 container init 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:30:18 localhost iscsid[217591]: + sudo -E kolla_set_configs Oct 5 05:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:30:18 localhost podman[217577]: 2025-10-05 09:30:18.906499142 +0000 UTC m=+0.204659456 container start 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 05:30:18 localhost podman[217577]: iscsid Oct 5 05:30:18 localhost systemd[1]: Started iscsid container. Oct 5 05:30:18 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 05:30:18 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 5 05:30:18 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 05:30:18 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 05:30:19 localhost podman[217599]: 2025-10-05 09:30:19.011130619 +0000 UTC m=+0.098459129 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:30:19 localhost podman[217599]: 2025-10-05 09:30:19.046684137 +0000 UTC m=+0.134012617 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 05:30:19 localhost podman[217599]: unhealthy Oct 5 05:30:19 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:30:19 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Failed with result 'exit-code'. Oct 5 05:30:19 localhost systemd[217613]: Queued start job for default target Main User Target. Oct 5 05:30:19 localhost systemd[217613]: Created slice User Application Slice. Oct 5 05:30:19 localhost systemd[217613]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 5 05:30:19 localhost systemd[217613]: Started Daily Cleanup of User's Temporary Directories. Oct 5 05:30:19 localhost systemd[217613]: Reached target Paths. Oct 5 05:30:19 localhost systemd[217613]: Reached target Timers. Oct 5 05:30:19 localhost systemd[217613]: Starting D-Bus User Message Bus Socket... Oct 5 05:30:19 localhost systemd[217613]: Starting Create User's Volatile Files and Directories... Oct 5 05:30:19 localhost systemd[217613]: Finished Create User's Volatile Files and Directories. Oct 5 05:30:19 localhost systemd[217613]: Listening on D-Bus User Message Bus Socket. Oct 5 05:30:19 localhost systemd[217613]: Reached target Sockets. Oct 5 05:30:19 localhost systemd[217613]: Reached target Basic System. 
Oct 5 05:30:19 localhost systemd[1]: Started User Manager for UID 0. Oct 5 05:30:19 localhost systemd[217613]: Reached target Main User Target. Oct 5 05:30:19 localhost systemd[217613]: Startup finished in 124ms. Oct 5 05:30:19 localhost systemd[1]: Started Session c14 of User root. Oct 5 05:30:19 localhost iscsid[217591]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:30:19 localhost iscsid[217591]: INFO:__main__:Validating config file Oct 5 05:30:19 localhost iscsid[217591]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:30:19 localhost iscsid[217591]: INFO:__main__:Writing out command to execute Oct 5 05:30:19 localhost systemd[1]: session-c14.scope: Deactivated successfully. Oct 5 05:30:19 localhost iscsid[217591]: ++ cat /run_command Oct 5 05:30:19 localhost iscsid[217591]: + CMD='/usr/sbin/iscsid -f' Oct 5 05:30:19 localhost iscsid[217591]: + ARGS= Oct 5 05:30:19 localhost iscsid[217591]: + sudo kolla_copy_cacerts Oct 5 05:30:19 localhost systemd[1]: Started Session c15 of User root. Oct 5 05:30:19 localhost systemd[1]: session-c15.scope: Deactivated successfully. Oct 5 05:30:19 localhost iscsid[217591]: + [[ ! -n '' ]] Oct 5 05:30:19 localhost iscsid[217591]: + . kolla_extend_start Oct 5 05:30:19 localhost iscsid[217591]: Running command: '/usr/sbin/iscsid -f' Oct 5 05:30:19 localhost iscsid[217591]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]] Oct 5 05:30:19 localhost iscsid[217591]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\''' Oct 5 05:30:19 localhost iscsid[217591]: + umask 0022 Oct 5 05:30:19 localhost iscsid[217591]: + exec /usr/sbin/iscsid -f Oct 5 05:30:19 localhost python3.9[217746]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:19 localhost systemd[1]: tmp-crun.OKSENW.mount: Deactivated successfully. 
Oct 5 05:30:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48583 DF PROTO=TCP SPT=34150 DPT=9100 SEQ=1819679108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3319D0000000001030307) Oct 5 05:30:20 localhost python3.9[217856]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:30:20.425 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:30:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:30:20.426 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:30:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:30:20.426 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:30:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46691 DF 
PROTO=TCP SPT=50820 DPT=9102 SEQ=778214288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE333DD0000000001030307) Oct 5 05:30:21 localhost python3.9[217966]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:30:21 localhost network[217983]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:30:21 localhost network[217984]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:30:21 localhost network[217985]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:30:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3494 DF PROTO=TCP SPT=45136 DPT=9105 SEQ=3967337929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE339CF0000000001030307) Oct 5 05:30:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:30:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58933 DF PROTO=TCP SPT=54628 DPT=9882 SEQ=1823576114 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3439D0000000001030307) Oct 5 05:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:30:25 localhost podman[218091]: 2025-10-05 09:30:25.277931173 +0000 UTC m=+0.092114479 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Oct 5 05:30:25 localhost podman[218091]: 2025-10-05 09:30:25.324789133 +0000 UTC m=+0.138972429 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 5 05:30:25 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:30:26 localhost python3.9[218243]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:30:27 localhost python3.9[218353]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Oct 5 05:30:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:30:27 localhost podman[218429]: 2025-10-05 09:30:27.93602124 +0000 UTC m=+0.086691167 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:30:27 localhost podman[218429]: 2025-10-05 09:30:27.969874669 +0000 UTC 
m=+0.120544616 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:30:27 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:30:28 localhost python3.9[218485]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58934 DF PROTO=TCP SPT=54628 DPT=9882 SEQ=1823576114 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3535E0000000001030307) Oct 5 05:30:28 localhost python3.9[218573]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656627.6739476-1256-253481459594407/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:29 localhost systemd[1]: Stopping User Manager for UID 0... Oct 5 05:30:29 localhost systemd[217613]: Activating special unit Exit the Session... Oct 5 05:30:29 localhost systemd[217613]: Stopped target Main User Target. Oct 5 05:30:29 localhost systemd[217613]: Stopped target Basic System. Oct 5 05:30:29 localhost systemd[217613]: Stopped target Paths. Oct 5 05:30:29 localhost systemd[217613]: Stopped target Sockets. Oct 5 05:30:29 localhost systemd[217613]: Stopped target Timers. Oct 5 05:30:29 localhost systemd[217613]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 05:30:29 localhost systemd[217613]: Closed D-Bus User Message Bus Socket. Oct 5 05:30:29 localhost systemd[217613]: Stopped Create User's Volatile Files and Directories. Oct 5 05:30:29 localhost systemd[217613]: Removed slice User Application Slice. 
Oct 5 05:30:29 localhost systemd[217613]: Reached target Shutdown. Oct 5 05:30:29 localhost systemd[217613]: Finished Exit the Session. Oct 5 05:30:29 localhost systemd[217613]: Reached target Exit the Session. Oct 5 05:30:29 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 05:30:29 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 05:30:29 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 05:30:29 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 5 05:30:29 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 05:30:29 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 05:30:29 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 05:30:29 localhost python3.9[218684]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:30 localhost python3.9[218794]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:30:30 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 5 05:30:30 localhost systemd[1]: Stopped Load Kernel Modules. Oct 5 05:30:30 localhost systemd[1]: Stopping Load Kernel Modules... Oct 5 05:30:30 localhost systemd[1]: Starting Load Kernel Modules... Oct 5 05:30:30 localhost systemd-modules-load[218798]: Module 'msr' is built in Oct 5 05:30:30 localhost systemd[1]: Finished Load Kernel Modules. 
Oct 5 05:30:31 localhost python3.9[218908]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:32 localhost python3.9[219018]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:33 localhost python3.9[219128]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:33 localhost python3.9[219238]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:34 localhost python3.9[219362]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656633.2856956-1430-98753195616036/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:34 localhost podman[219467]: 2025-10-05 09:30:34.746615473 +0000 UTC m=+0.088071934 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., 
GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.openshift.expose-services=, distribution-scope=public) Oct 5 05:30:34 localhost podman[219467]: 2025-10-05 09:30:34.869816147 +0000 UTC m=+0.211272638 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, 
ceph=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container) Oct 5 05:30:35 localhost python3.9[219625]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:30:37 localhost python3.9[219808]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:38 localhost python3.9[219918]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:38 localhost systemd-journald[47722]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Oct 5 05:30:38 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:30:38 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:30:38 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:30:39 localhost python3.9[220029]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:39 localhost systemd[1]: virtnodedevd.service: Deactivated successfully. Oct 5 05:30:40 localhost python3.9[220140]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:40 localhost systemd[1]: virtproxyd.service: Deactivated successfully. 
Oct 5 05:30:40 localhost python3.9[220250]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:41 localhost python3.9[220361]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:42 localhost python3.9[220471]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:43 localhost python3.9[220581]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62462 DF PROTO=TCP SPT=42718 DPT=9102 SEQ=3901407516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE38D3D0000000001030307) Oct 5 05:30:43 localhost python3.9[220693]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62463 DF PROTO=TCP SPT=42718 DPT=9102 SEQ=3901407516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3915D0000000001030307) Oct 5 05:30:44 localhost python3.9[220803]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:45 localhost python3.9[220913]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:45 localhost python3.9[220970]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:46 localhost python3.9[221080]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container 
follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62464 DF PROTO=TCP SPT=42718 DPT=9102 SEQ=3901407516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3995D0000000001030307) Oct 5 05:30:46 localhost python3.9[221137]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29960 DF PROTO=TCP SPT=48556 DPT=9100 SEQ=1527912897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE39ACD0000000001030307) Oct 5 05:30:47 localhost python3.9[221247]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29961 DF PROTO=TCP SPT=48556 
DPT=9100 SEQ=1527912897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE39EDD0000000001030307) Oct 5 05:30:48 localhost python3.9[221357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:30:49 localhost systemd[1]: tmp-crun.yw9IAB.mount: Deactivated successfully. Oct 5 05:30:49 localhost podman[221415]: 2025-10-05 09:30:49.344654371 +0000 UTC m=+0.090529098 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:30:49 localhost podman[221415]: 2025-10-05 09:30:49.353886293 +0000 UTC m=+0.099760950 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 5 05:30:49 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:30:49 localhost python3.9[221414]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:30:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29962 DF PROTO=TCP SPT=48556 DPT=9100 SEQ=1527912897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3A6DD0000000001030307)
Oct 5 05:30:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62465 DF PROTO=TCP SPT=42718 DPT=9102 SEQ=3901407516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3A91D0000000001030307)
Oct 5 05:30:50 localhost python3.9[221543]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:30:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61337 DF PROTO=TCP SPT=34236 DPT=9105 SEQ=28181695 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3AEFF0000000001030307)
Oct 5 05:30:52 localhost systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 5 05:30:52 localhost python3.9[221600]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:30:53 localhost python3.9[221711]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:30:53 localhost systemd[1]: Reloading.
Oct 5 05:30:53 localhost systemd-rc-local-generator[221733]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:30:53 localhost systemd-sysv-generator[221739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:30:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:30:54 localhost python3.9[221859]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:30:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8703 DF PROTO=TCP SPT=40104 DPT=9882 SEQ=176617256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3B8DD0000000001030307)
Oct 5 05:30:54 localhost python3.9[221916]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:30:55 localhost python3.9[222026]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:30:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:30:55 localhost podman[222078]: 2025-10-05 09:30:55.689437995 +0000 UTC m=+0.096836993 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 5 05:30:55 localhost podman[222078]: 2025-10-05 09:30:55.731878449 +0000 UTC m=+0.139277447 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 05:30:55 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:30:55 localhost python3.9[222094]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:30:56 localhost python3.9[222218]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:30:56 localhost systemd[1]: Reloading.
Oct 5 05:30:56 localhost systemd-sysv-generator[222249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:30:56 localhost systemd-rc-local-generator[222242]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:30:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:30:56 localhost systemd[1]: Starting Create netns directory...
Oct 5 05:30:56 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 5 05:30:56 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 5 05:30:56 localhost systemd[1]: Finished Create netns directory.
Oct 5 05:30:57 localhost python3.9[222372]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:30:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:30:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8704 DF PROTO=TCP SPT=40104 DPT=9882 SEQ=176617256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE3C89E0000000001030307)
Oct 5 05:30:58 localhost systemd[1]: tmp-crun.TVTKSs.mount: Deactivated successfully.
Oct 5 05:30:58 localhost podman[222461]: 2025-10-05 09:30:58.685094593 +0000 UTC m=+0.090955739 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 5 05:30:58 localhost podman[222461]: 2025-10-05 09:30:58.693684148 +0000 UTC m=+0.099545294 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 5 05:30:58 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:30:59 localhost python3.9[222499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:30:59 localhost python3.9[222587]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656658.2018113-2051-266698982149447/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:31:01 localhost python3.9[222697]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:31:02 localhost python3.9[222807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:31:02 localhost python3.9[222895]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656661.7490623-2126-83044017010483/.source.json _original_basename=.1810i988 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:31:03 localhost python3.9[223005]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:31:05 localhost python3.9[223313]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 5 05:31:06 localhost python3.9[223423]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 5 05:31:07 localhost python3.9[223533]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 5 05:31:12 localhost python3[223670]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 5 05:31:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51616 DF PROTO=TCP SPT=46370 DPT=9102 SEQ=3948121944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4026D0000000001030307)
Oct 5 05:31:13 localhost podman[223684]: 2025-10-05 09:31:12.118749515 +0000 UTC m=+0.039055537 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 5 05:31:13 localhost podman[223732]:
Oct 5 05:31:14 localhost podman[223732]: 2025-10-05 09:31:14.004569399 +0000 UTC m=+0.084581581 container create efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 5 05:31:14 localhost podman[223732]: 2025-10-05 09:31:13.962134145 +0000 UTC m=+0.042146367 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 5 05:31:14 localhost python3[223670]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 5 05:31:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51617 DF PROTO=TCP SPT=46370 DPT=9102 SEQ=3948121944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4065D0000000001030307)
Oct 5 05:31:14 localhost python3.9[223878]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:31:15 localhost python3.9[223990]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:31:16 localhost python3.9[224045]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:31:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51618 DF PROTO=TCP SPT=46370 DPT=9102 SEQ=3948121944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE40E5D0000000001030307)
Oct 5 05:31:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49023 DF PROTO=TCP SPT=35780 DPT=9100 SEQ=1620499128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE40FFD0000000001030307)
Oct 5 05:31:17 localhost python3.9[224154]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656676.3744943-2390-3675888895332/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:31:17 localhost python3.9[224209]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 5 05:31:17 localhost systemd[1]: Reloading.
Oct 5 05:31:17 localhost systemd-rc-local-generator[224234]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:31:17 localhost systemd-sysv-generator[224238]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:31:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:31:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49024 DF PROTO=TCP SPT=35780 DPT=9100 SEQ=1620499128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4141D0000000001030307)
Oct 5 05:31:18 localhost python3.9[224300]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:31:18 localhost systemd[1]: Reloading.
Oct 5 05:31:18 localhost systemd-rc-local-generator[224326]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:31:18 localhost systemd-sysv-generator[224331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:31:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:31:18 localhost systemd[1]: Starting multipathd container...
Oct 5 05:31:19 localhost systemd[1]: Started libcrun container.
Oct 5 05:31:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5adb2994ecccb70061160b5b6f230d1ab00b9a53c944832153f6296c9506d6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 5 05:31:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5adb2994ecccb70061160b5b6f230d1ab00b9a53c944832153f6296c9506d6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 5 05:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:31:19 localhost podman[224341]: 2025-10-05 09:31:19.144081025 +0000 UTC m=+0.160311239 container init efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 5 05:31:19 localhost multipathd[224357]: + sudo -E kolla_set_configs
Oct 5 05:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:31:19 localhost podman[224341]: 2025-10-05 09:31:19.218718854 +0000 UTC m=+0.234949038 container start efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible)
Oct 5 05:31:19 localhost podman[224341]: multipathd
Oct 5 05:31:19 localhost systemd[1]: Started multipathd container.
Oct 5 05:31:19 localhost multipathd[224357]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 5 05:31:19 localhost multipathd[224357]: INFO:__main__:Validating config file
Oct 5 05:31:19 localhost multipathd[224357]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 5 05:31:19 localhost multipathd[224357]: INFO:__main__:Writing out command to execute
Oct 5 05:31:19 localhost multipathd[224357]: ++ cat /run_command
Oct 5 05:31:19 localhost multipathd[224357]: + CMD='/usr/sbin/multipathd -d'
Oct 5 05:31:19 localhost multipathd[224357]: + ARGS=
Oct 5 05:31:19 localhost multipathd[224357]: + sudo kolla_copy_cacerts
Oct 5 05:31:19 localhost multipathd[224357]: + [[ ! -n '' ]]
Oct 5 05:31:19 localhost multipathd[224357]: + . kolla_extend_start
Oct 5 05:31:19 localhost multipathd[224357]: Running command: '/usr/sbin/multipathd -d'
Oct 5 05:31:19 localhost multipathd[224357]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 5 05:31:19 localhost multipathd[224357]: + umask 0022
Oct 5 05:31:19 localhost multipathd[224357]: + exec /usr/sbin/multipathd -d
Oct 5 05:31:19 localhost multipathd[224357]: 10158.517119 | --------start up--------
Oct 5 05:31:19 localhost multipathd[224357]: 10158.517141 | read /etc/multipath.conf
Oct 5 05:31:19 localhost multipathd[224357]: 10158.522033 | path checkers start up
Oct 5 05:31:19 localhost podman[224366]: 2025-10-05 09:31:19.300344447 +0000 UTC m=+0.072379341 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Oct 5 05:31:19 localhost podman[224366]: 2025-10-05 09:31:19.311702465 +0000 UTC m=+0.083737339 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 05:31:19 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 05:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:31:19 localhost podman[224464]: 2025-10-05 09:31:19.662181715 +0000 UTC m=+0.071821976 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 05:31:19 localhost podman[224464]: 2025-10-05 09:31:19.67114128 +0000 UTC m=+0.080781531 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:31:19 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:31:19 localhost python3.9[224520]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:31:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49025 DF PROTO=TCP SPT=35780 DPT=9100 SEQ=1620499128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE41C1E0000000001030307) Oct 5 05:31:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:31:20.426 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:31:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:31:20.427 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:31:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:31:20.429 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:31:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51619 DF PROTO=TCP SPT=46370 DPT=9102 SEQ=3948121944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE41E1D0000000001030307) Oct 5 05:31:20 localhost python3.9[224632]: ansible-ansible.legacy.command Invoked with _raw_params=podman 
ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:31:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6551 DF PROTO=TCP SPT=54860 DPT=9105 SEQ=3096763920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4242F0000000001030307) Oct 5 05:31:22 localhost python3.9[224755]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:31:22 localhost systemd[1]: Stopping multipathd container... Oct 5 05:31:22 localhost systemd[1]: tmp-crun.EySCSw.mount: Deactivated successfully. Oct 5 05:31:22 localhost multipathd[224357]: 10161.442313 | exit (signal) Oct 5 05:31:22 localhost multipathd[224357]: 10161.442806 | --------shut down------- Oct 5 05:31:22 localhost systemd[1]: libpod-efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.scope: Deactivated successfully. 
Oct 5 05:31:22 localhost podman[224759]: 2025-10-05 09:31:22.25050286 +0000 UTC m=+0.109824314 container died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 05:31:22 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.timer: Deactivated successfully. 
Oct 5 05:31:22 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:31:22 localhost podman[224759]: 2025-10-05 09:31:22.423959514 +0000 UTC m=+0.283280938 container cleanup efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true) Oct 5 05:31:22 localhost podman[224759]: 
multipathd Oct 5 05:31:22 localhost podman[224787]: 2025-10-05 09:31:22.521222937 +0000 UTC m=+0.060347405 container cleanup efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:31:22 localhost podman[224787]: multipathd Oct 5 05:31:22 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully. 
Oct 5 05:31:22 localhost systemd[1]: Stopped multipathd container. Oct 5 05:31:22 localhost systemd[1]: Starting multipathd container... Oct 5 05:31:22 localhost systemd[1]: Started libcrun container. Oct 5 05:31:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5adb2994ecccb70061160b5b6f230d1ab00b9a53c944832153f6296c9506d6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 5 05:31:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5adb2994ecccb70061160b5b6f230d1ab00b9a53c944832153f6296c9506d6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:31:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:31:22 localhost podman[224800]: 2025-10-05 09:31:22.694040093 +0000 UTC m=+0.136282698 container init efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:31:22 localhost multipathd[224814]: + sudo -E kolla_set_configs Oct 5 05:31:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:31:22 localhost podman[224800]: 2025-10-05 09:31:22.732338139 +0000 UTC m=+0.174580734 container start efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:31:22 localhost podman[224800]: multipathd Oct 5 05:31:22 localhost systemd[1]: Started multipathd container. Oct 5 05:31:22 localhost multipathd[224814]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:31:22 localhost multipathd[224814]: INFO:__main__:Validating config file Oct 5 05:31:22 localhost multipathd[224814]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:31:22 localhost multipathd[224814]: INFO:__main__:Writing out command to execute Oct 5 05:31:22 localhost multipathd[224814]: ++ cat /run_command Oct 5 05:31:22 localhost multipathd[224814]: + CMD='/usr/sbin/multipathd -d' Oct 5 05:31:22 localhost multipathd[224814]: + ARGS= Oct 5 05:31:22 localhost multipathd[224814]: + sudo kolla_copy_cacerts Oct 5 05:31:22 localhost multipathd[224814]: + [[ ! -n '' ]] Oct 5 05:31:22 localhost multipathd[224814]: + . 
kolla_extend_start Oct 5 05:31:22 localhost multipathd[224814]: Running command: '/usr/sbin/multipathd -d' Oct 5 05:31:22 localhost multipathd[224814]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Oct 5 05:31:22 localhost multipathd[224814]: + umask 0022 Oct 5 05:31:22 localhost multipathd[224814]: + exec /usr/sbin/multipathd -d Oct 5 05:31:22 localhost podman[224822]: 2025-10-05 09:31:22.827467946 +0000 UTC m=+0.092566351 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:31:22 localhost multipathd[224814]: 10162.055716 | --------start up-------- Oct 5 05:31:22 localhost multipathd[224814]: 10162.055738 | read /etc/multipath.conf Oct 5 05:31:22 localhost multipathd[224814]: 10162.059584 | path checkers start up Oct 5 05:31:22 localhost podman[224822]: 2025-10-05 09:31:22.844201755 +0000 UTC m=+0.109300220 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:31:22 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:31:23 localhost systemd[1]: tmp-crun.E2LxYa.mount: Deactivated successfully. Oct 5 05:31:24 localhost python3.9[224961]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5616 DF PROTO=TCP SPT=49854 DPT=9882 SEQ=2944864060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE42DDD0000000001030307) Oct 5 05:31:25 localhost python3.9[225071]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:31:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:31:25 localhost systemd[1]: tmp-crun.qMp5TO.mount: Deactivated successfully. 
Oct 5 05:31:25 localhost podman[225182]: 2025-10-05 09:31:25.973135001 +0000 UTC m=+0.086989634 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 05:31:26 localhost podman[225182]: 2025-10-05 09:31:26.010937903 +0000 UTC m=+0.124792586 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:31:26 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:31:26 localhost python3.9[225181]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Oct 5 05:31:26 localhost python3.9[225324]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:31:27 localhost python3.9[225412]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656686.366227-2630-196821968813960/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:28 localhost python3.9[225522]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5617 DF PROTO=TCP SPT=49854 DPT=9882 SEQ=2944864060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE43D9E0000000001030307) Oct 5 05:31:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:31:28 localhost systemd[1]: tmp-crun.wOFbVf.mount: Deactivated successfully. 
Oct 5 05:31:28 localhost podman[225633]: 2025-10-05 09:31:28.882698898 +0000 UTC m=+0.090612526 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 05:31:28 localhost podman[225633]: 2025-10-05 09:31:28.887935875 +0000 UTC m=+0.095849523 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:31:28 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:31:29 localhost python3.9[225632]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:31:29 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 5 05:31:29 localhost systemd[1]: Stopped Load Kernel Modules.
Oct 5 05:31:29 localhost systemd[1]: Stopping Load Kernel Modules...
Oct 5 05:31:29 localhost systemd[1]: Starting Load Kernel Modules...
Oct 5 05:31:29 localhost systemd-modules-load[225653]: Module 'msr' is built in
Oct 5 05:31:29 localhost systemd[1]: Finished Load Kernel Modules.
Oct 5 05:31:30 localhost python3.9[225763]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:31:31 localhost python3.9[225826]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:31:38 localhost systemd[1]: Reloading.
Oct 5 05:31:38 localhost systemd-sysv-generator[225951]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:31:38 localhost systemd-rc-local-generator[225946]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:31:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:31:39 localhost systemd[1]: Reloading.
Oct 5 05:31:39 localhost systemd-rc-local-generator[225983]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:31:39 localhost systemd-sysv-generator[225986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:31:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:31:39 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 5 05:31:39 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 5 05:31:39 localhost lvm[226035]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 5 05:31:39 localhost lvm[226035]: VG ceph_vg1 finished
Oct 5 05:31:39 localhost lvm[226036]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 5 05:31:39 localhost lvm[226036]: VG ceph_vg0 finished
Oct 5 05:31:39 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 05:31:39 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 05:31:39 localhost systemd[1]: Reloading.
Oct 5 05:31:39 localhost systemd-sysv-generator[226089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:31:39 localhost systemd-rc-local-generator[226085]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:31:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:31:40 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 05:31:41 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 05:31:41 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 05:31:41 localhost systemd[1]: man-db-cache-update.service: Consumed 1.391s CPU time.
Oct 5 05:31:41 localhost systemd[1]: run-r722cb262ac124259bfb8ccdd1e88abb0.service: Deactivated successfully.
Oct 5 05:31:41 localhost python3.9[227332]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:31:42 localhost python3.9[227440]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:31:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6547 DF PROTO=TCP SPT=47522 DPT=9102 SEQ=3062437730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4779D0000000001030307)
Oct 5 05:31:43 localhost python3.9[227554]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:31:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6548 DF PROTO=TCP SPT=47522 DPT=9102 SEQ=3062437730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE47B9D0000000001030307)
Oct 5 05:31:45 localhost python3.9[227664]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 5 05:31:45 localhost systemd[1]: Reloading.
Oct 5 05:31:45 localhost systemd-rc-local-generator[227689]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:31:45 localhost systemd-sysv-generator[227692]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:31:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:31:46 localhost python3.9[227808]: ansible-ansible.builtin.service_facts Invoked
Oct 5 05:31:46 localhost network[227825]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 5 05:31:46 localhost network[227826]: 'network-scripts' will be removed from distribution in near future.
Oct 5 05:31:46 localhost network[227827]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 5 05:31:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6549 DF PROTO=TCP SPT=47522 DPT=9102 SEQ=3062437730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4839D0000000001030307)
Oct 5 05:31:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6266 DF PROTO=TCP SPT=60504 DPT=9100 SEQ=2049288950 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4852C0000000001030307)
Oct 5 05:31:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:31:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6267 DF PROTO=TCP SPT=60504 DPT=9100 SEQ=2049288950 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4891D0000000001030307)
Oct 5 05:31:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6268 DF PROTO=TCP SPT=60504 DPT=9100 SEQ=2049288950 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4911D0000000001030307)
Oct 5 05:31:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6550 DF PROTO=TCP SPT=47522 DPT=9102 SEQ=3062437730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4935D0000000001030307)
Oct 5 05:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:31:50 localhost podman[227971]: 2025-10-05 09:31:50.677184921 +0000 UTC m=+0.083558152 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, tcib_managed=true)
Oct 5 05:31:50 localhost podman[227971]: 2025-10-05 09:31:50.712088839 +0000 UTC m=+0.118462050 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 5 05:31:50 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:31:51 localhost python3.9[228083]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:31:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32058 DF PROTO=TCP SPT=43870 DPT=9105 SEQ=3597019457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4995F0000000001030307)
Oct 5 05:31:52 localhost python3.9[228194]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:31:53 localhost python3.9[228305]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:31:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:31:53 localhost podman[228307]: 2025-10-05 09:31:53.130227209 +0000 UTC m=+0.069308443 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:31:53 localhost podman[228307]: 2025-10-05 09:31:53.168998047 +0000 UTC m=+0.108079261 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 05:31:53 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 05:31:53 localhost python3.9[228435]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:31:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33798 DF PROTO=TCP SPT=41024 DPT=9882 SEQ=464254891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4A31E0000000001030307)
Oct 5 05:31:54 localhost python3.9[228546]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:31:55 localhost python3.9[228657]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:31:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:31:56 localhost podman[228717]: 2025-10-05 09:31:56.68364205 +0000 UTC m=+0.087380352 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 5 05:31:56 localhost podman[228717]: 2025-10-05 09:31:56.75285179 +0000 UTC m=+0.156590102 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:31:56 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:31:57 localhost python3.9[228793]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:31:58 localhost python3.9[228904]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:31:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33799 DF PROTO=TCP SPT=41024 DPT=9882 SEQ=464254891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4B2DD0000000001030307)
Oct 5 05:31:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:31:59 localhost systemd[1]: tmp-crun.o1A2gS.mount: Deactivated successfully.
Oct 5 05:31:59 localhost podman[228977]: 2025-10-05 09:31:59.672763793 +0000 UTC m=+0.083514971 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 05:31:59 localhost podman[228977]: 2025-10-05 09:31:59.70686486 +0000 UTC m=+0.117616018 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 5 05:31:59 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:31:59 localhost python3.9[229035]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:00 localhost python3.9[229145]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:01 localhost python3.9[229255]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:01 localhost python3.9[229365]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:02 localhost python3.9[229475]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:03 localhost python3.9[229585]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:03 localhost python3.9[229695]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:04 localhost python3.9[229805]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:05 localhost python3.9[229915]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:05 localhost python3.9[230025]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:06 localhost python3.9[230135]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:06 localhost python3.9[230245]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:07 localhost python3.9[230355]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:07 localhost python3.9[230465]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:08 localhost python3.9[230575]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:09 localhost python3.9[230685]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:32:10 localhost python3.9[230795]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None
chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:32:12 localhost python3.9[230905]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:32:13 localhost python3.9[231015]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:32:13 localhost systemd[1]: Reloading. Oct 5 05:32:13 localhost systemd-rc-local-generator[231041]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:32:13 localhost systemd-sysv-generator[231044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:32:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:32:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22437 DF PROTO=TCP SPT=36792 DPT=9102 SEQ=2530244587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4ECCD0000000001030307)
Oct 5 05:32:14 localhost python3.9[231162]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:32:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22438 DF PROTO=TCP SPT=36792 DPT=9102 SEQ=2530244587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4F0DE0000000001030307)
Oct 5 05:32:14 localhost python3.9[231273]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:32:15 localhost python3.9[231384]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:32:15 localhost python3.9[231495]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:32:16 localhost python3.9[231606]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:32:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22439 DF PROTO=TCP SPT=36792 DPT=9102 SEQ=2530244587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4F8DE0000000001030307)
Oct 5 05:32:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44306 DF PROTO=TCP SPT=33244 DPT=9100 SEQ=2787642663 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4FA5C0000000001030307)
Oct 5 05:32:17 localhost python3.9[231717]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:32:17 localhost python3.9[231828]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:32:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44307 DF PROTO=TCP SPT=33244 DPT=9100 SEQ=2787642663 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE4FE5D0000000001030307)
Oct 5 05:32:18 localhost python3.9[231939]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:32:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44308 DF PROTO=TCP SPT=33244 DPT=9100 SEQ=2787642663 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5065D0000000001030307)
Oct 5 05:32:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:32:20.428 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:32:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:32:20.429 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:32:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:32:20.430 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:32:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22440 DF PROTO=TCP SPT=36792 DPT=9102 SEQ=2530244587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5089E0000000001030307)
Oct 5 05:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:32:21 localhost podman[231958]: 2025-10-05 09:32:21.679995336 +0000 UTC m=+0.088516351 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 5 05:32:21 localhost podman[231958]: 2025-10-05 09:32:21.689074003 +0000 UTC m=+0.097595068 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:32:21 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:32:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50203 DF PROTO=TCP SPT=60260 DPT=9105 SEQ=2278982354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE50E8F0000000001030307)
Oct 5 05:32:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:32:23 localhost podman[232071]: 2025-10-05 09:32:23.655440408 +0000 UTC m=+0.080079692 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 5 05:32:23 localhost podman[232071]: 2025-10-05 09:32:23.667611905 +0000 UTC m=+0.092251159 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 5 05:32:23 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 05:32:23 localhost python3.9[232070]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:24 localhost python3.9[232197]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27771 DF PROTO=TCP SPT=51884 DPT=9882 SEQ=2699735254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5185D0000000001030307)
Oct 5 05:32:24 localhost python3.9[232307]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:25 localhost python3.9[232417]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:26 localhost python3.9[232527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:26 localhost python3.9[232637]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:32:27 localhost systemd[1]: tmp-crun.eQcbOK.mount: Deactivated successfully.
Oct 5 05:32:27 localhost podman[232748]: 2025-10-05 09:32:27.440826261 +0000 UTC m=+0.089206360 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 5 05:32:27 localhost podman[232748]: 2025-10-05 09:32:27.474748992 +0000 UTC m=+0.123129091 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 05:32:27 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:32:27 localhost python3.9[232747]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:28 localhost python3.9[232882]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27772 DF PROTO=TCP SPT=51884 DPT=9882 SEQ=2699735254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5281D0000000001030307)
Oct 5 05:32:28 localhost python3.9[232992]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:29 localhost python3.9[233102]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:32:29 localhost systemd[1]: tmp-crun.SsOWzQ.mount: Deactivated successfully.
Oct 5 05:32:29 localhost podman[233213]: 2025-10-05 09:32:29.879630627 +0000 UTC m=+0.083955643 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:32:29 localhost podman[233213]: 2025-10-05 09:32:29.88471206 +0000 UTC m=+0.089037086 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 5 05:32:29 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:32:30 localhost python3.9[233212]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:31 localhost python3.9[233341]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:38 localhost python3.9[233487]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 5 05:32:39 localhost python3.9[233631]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 5 05:32:40 localhost python3.9[233765]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471150.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 5 05:32:41 localhost sshd[233791]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:32:41 localhost systemd-logind[760]: New session 57 of user zuul.
Oct 5 05:32:41 localhost systemd[1]: Started Session 57 of User zuul.
Oct 5 05:32:41 localhost systemd[1]: session-57.scope: Deactivated successfully.
Oct 5 05:32:41 localhost systemd-logind[760]: Session 57 logged out. Waiting for processes to exit.
Oct 5 05:32:41 localhost systemd-logind[760]: Removed session 57.
Oct 5 05:32:42 localhost python3.9[233902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:32:43 localhost python3.9[233988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656762.48744-4267-139447177302266/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48316 DF PROTO=TCP SPT=41672 DPT=9102 SEQ=3815254837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE561FD0000000001030307)
Oct 5 05:32:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48317 DF PROTO=TCP SPT=41672 DPT=9102 SEQ=3815254837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5661D0000000001030307)
Oct 5 05:32:44 localhost python3.9[234096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:32:45 localhost python3.9[234151]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:45 localhost python3.9[234259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:32:46 localhost python3.9[234345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656765.1700354-4267-249821145028824/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:32:46 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48318 DF PROTO=TCP SPT=41672 DPT=9102 SEQ=3815254837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE56E1D0000000001030307) Oct 5 05:32:46 localhost python3.9[234453]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51442 DF PROTO=TCP SPT=54232 DPT=9100 SEQ=2489911313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE56F8D0000000001030307) Oct 5 05:32:47 localhost python3.9[234539]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656766.273323-4267-106413220667904/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=7ac938a335a3d7c35e640d8a23d0622f34c4ef39 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:47 localhost python3.9[234647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51443 DF PROTO=TCP SPT=54232 DPT=9100 SEQ=2489911313 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5739D0000000001030307) Oct 5 05:32:48 localhost python3.9[234733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656767.3892043-4267-167299875732417/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:49 localhost python3.9[234843]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51444 DF PROTO=TCP SPT=54232 DPT=9100 SEQ=2489911313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE57B9D0000000001030307) Oct 5 05:32:50 localhost python3.9[234953]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48319 DF PROTO=TCP SPT=41672 DPT=9102 SEQ=3815254837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE57DDE0000000001030307) Oct 5 05:32:50 localhost python3.9[235063]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:32:51 localhost python3.9[235175]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35748 DF PROTO=TCP SPT=52454 DPT=9105 SEQ=96122320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE583BF0000000001030307) Oct 5 05:32:52 localhost python3.9[235283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:32:52 localhost podman[235286]: 2025-10-05 09:32:52.689521707 +0000 UTC m=+0.096422549 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Oct 5 05:32:52 localhost podman[235286]: 2025-10-05 09:32:52.705348456 +0000 UTC m=+0.112249328 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid) Oct 5 05:32:52 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:32:53 localhost python3.9[235412]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:54 localhost python3.9[235498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656773.3233938-4600-104388431930506/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10914 DF PROTO=TCP SPT=33874 DPT=9882 SEQ=4222581407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE58D9D0000000001030307) Oct 5 05:32:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:32:54 localhost podman[235499]: 2025-10-05 09:32:54.675037662 +0000 UTC m=+0.083626521 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd) Oct 5 05:32:54 localhost podman[235499]: 2025-10-05 09:32:54.690916644 +0000 UTC m=+0.099505493 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:32:54 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:32:55 localhost python3.9[235625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:55 localhost python3.9[235711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656774.903024-4645-131174198973116/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:56 localhost python3.9[235821]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Oct 5 05:32:57 localhost python3.9[235931]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:32:57 localhost systemd[1]: tmp-crun.lb8xLy.mount: Deactivated successfully. 
Oct 5 05:32:57 localhost podman[235945]: 2025-10-05 09:32:57.69597355 +0000 UTC m=+0.098300854 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 05:32:57 localhost podman[235945]: 2025-10-05 09:32:57.740735063 +0000 UTC m=+0.143062357 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:32:57 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:32:58 localhost python3[236066]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:32:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10915 DF PROTO=TCP SPT=33874 DPT=9882 SEQ=4222581407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE59D5D0000000001030307) Oct 5 05:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:33:00 localhost podman[236093]: 2025-10-05 09:33:00.642945504 +0000 UTC m=+0.053216262 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:33:00 localhost podman[236093]: 2025-10-05 09:33:00.650560192 +0000 UTC 
m=+0.060830950 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:33:00 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:33:08 localhost podman[236080]: 2025-10-05 09:32:58.714079033 +0000 UTC m=+0.047123663 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Oct 5 05:33:09 localhost podman[236159]: Oct 5 05:33:09 localhost podman[236159]: 2025-10-05 09:33:09.088308849 +0000 UTC m=+0.046181760 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Oct 5 05:33:09 localhost podman[236159]: 2025-10-05 09:33:09.201371505 +0000 UTC m=+0.159244356 container create e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}) Oct 5 05:33:09 localhost python3[236066]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env 
__OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Oct 5 05:33:10 localhost python3.9[236304]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:11 localhost python3.9[236416]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Oct 5 05:33:12 localhost python3.9[236526]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:33:13 localhost python3[236636]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers 
config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:33:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=750 DF PROTO=TCP SPT=53414 DPT=9102 SEQ=618497113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5D72E0000000001030307) Oct 5 05:33:13 localhost python3[236636]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "0d460c957a79c0fa941447cb00e5ab934f0ccc1442862d4e417ff427bd26aed9",#012 "Digest": "sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:32:21.432647731Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1207527293,#012 "VirtualSize": 1207527293,#012 
"GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:6a39f36d67f67acbd99daa43f5f54c2ceabda19dd25b824285c9338b74a7494e",#012 "sha256:9a26e1dd0ae990be1ae7a87aaaac389265f77f7100ea3ac633d95d89956449a4"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 
{#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && 
crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Oct 5 05:33:13 localhost podman[236685]: 2025-10-05 09:33:13.610263182 +0000 UTC m=+0.089453435 container remove 5b004080be7323a32392ae8c7641c7f858b1bdfdcc193229502f45fc25478424 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, config_id=tripleo_step5, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'bfafc2f71ef1d8535e7a88ec76ac5234-012327e9705c184cfee14ca411150d67'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 05:33:13 localhost python3[236636]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Oct 5 05:33:13 localhost podman[236699]: Oct 5 05:33:13 localhost podman[236699]: 2025-10-05 09:33:13.715979937 +0000 UTC m=+0.085760984 container create dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 05:33:13 localhost podman[236699]: 2025-10-05 09:33:13.676558066 +0000 UTC m=+0.046339113 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Oct 5 05:33:13 localhost python3[236636]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Oct 5 05:33:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=751 DF PROTO=TCP SPT=53414 DPT=9102 SEQ=618497113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5DB1E0000000001030307) Oct 5 05:33:15 localhost 
python3.9[236847]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:16 localhost python3.9[236959]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:33:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=752 DF PROTO=TCP SPT=53414 DPT=9102 SEQ=618497113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5E31E0000000001030307) Oct 5 05:33:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3118 DF PROTO=TCP SPT=59370 DPT=9100 SEQ=613144447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5E4BC0000000001030307) Oct 5 05:33:17 localhost python3.9[237068]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656796.175839-4921-193054912943526/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:33:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3119 
DF PROTO=TCP SPT=59370 DPT=9100 SEQ=613144447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5E8DD0000000001030307) Oct 5 05:33:17 localhost python3.9[237123]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:33:17 localhost systemd[1]: Reloading. Oct 5 05:33:18 localhost systemd-rc-local-generator[237151]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:33:18 localhost systemd-sysv-generator[237155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:33:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:33:18 localhost python3.9[237214]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:33:19 localhost systemd[1]: Reloading. Oct 5 05:33:19 localhost systemd-sysv-generator[237245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:33:19 localhost systemd-rc-local-generator[237242]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:33:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:33:19 localhost systemd[1]: Starting nova_compute container... Oct 5 05:33:19 localhost systemd[1]: Started libcrun container. 
Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost podman[237254]: 2025-10-05 09:33:19.45513142 +0000 UTC m=+0.114520804 container init dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 
'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Oct 5 05:33:19 localhost podman[237254]: 2025-10-05 09:33:19.46567454 +0000 UTC m=+0.125063914 container start dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:33:19 localhost podman[237254]: nova_compute Oct 5 05:33:19 localhost systemd[1]: Started nova_compute container. Oct 5 05:33:19 localhost nova_compute[237268]: + sudo -E kolla_set_configs Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Validating config file Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying service configuration files Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Deleting /etc/nova/nova.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/nova/nova.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying 
/var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Deleting /etc/ceph Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Creating directory /etc/ceph Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/ceph Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Deleting 
/var/lib/nova/.ssh/config Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Writing out command to execute Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:19 localhost nova_compute[237268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 5 05:33:19 localhost nova_compute[237268]: ++ cat /run_command Oct 5 05:33:19 localhost nova_compute[237268]: + CMD=nova-compute Oct 5 05:33:19 localhost nova_compute[237268]: + ARGS= Oct 5 05:33:19 localhost nova_compute[237268]: + sudo kolla_copy_cacerts Oct 5 05:33:19 localhost nova_compute[237268]: Running command: 'nova-compute' Oct 5 05:33:19 localhost nova_compute[237268]: + [[ ! -n '' ]] Oct 5 05:33:19 localhost nova_compute[237268]: + . 
kolla_extend_start Oct 5 05:33:19 localhost nova_compute[237268]: + echo 'Running command: '\''nova-compute'\''' Oct 5 05:33:19 localhost nova_compute[237268]: + umask 0022 Oct 5 05:33:19 localhost nova_compute[237268]: + exec nova-compute Oct 5 05:33:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3120 DF PROTO=TCP SPT=59370 DPT=9100 SEQ=613144447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5F0DD0000000001030307) Oct 5 05:33:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:33:20.428 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:33:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:33:20.429 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:33:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:33:20.430 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:33:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=753 DF PROTO=TCP SPT=53414 DPT=9102 SEQ=618497113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5F2DD0000000001030307) Oct 5 05:33:20 localhost python3.9[237388]: ansible-ansible.builtin.stat Invoked with 
path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.196 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.197 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.197 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.197 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.307 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.329 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:33:21 localhost python3.9[237500]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.851 2 INFO nova.virt.driver [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.968 2 INFO 
nova.compute.provider_config [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.983 2 WARNING nova.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.983 2 DEBUG oslo_concurrency.lockutils [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.984 2 DEBUG oslo_concurrency.lockutils [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.984 2 DEBUG oslo_concurrency.lockutils [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.984 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.984 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - 
- - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.985 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.985 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.985 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.985 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.985 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.986 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.986 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.986 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.986 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.986 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.986 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.987 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.987 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.987 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.987 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.987 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.988 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.988 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] console_host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.988 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.988 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.988 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost 
nova_compute[237268]: 2025-10-05 09:33:21.988 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.989 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.989 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.989 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.989 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.989 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.990 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.990 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.990 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.990 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.990 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.991 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.991 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] force_raw_images = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.991 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.991 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.991 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.992 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.992 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.992 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.992 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.992 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.993 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.993 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.993 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.993 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.993 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.994 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 
05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.994 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.994 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.994 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.994 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.995 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.995 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.995 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.995 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.995 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.996 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.996 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.996 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.996 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.996 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.997 
2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.997 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.997 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.997 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.997 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.997 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.998 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] max_concurrent_snapshots = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.998 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.998 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.998 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.998 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.999 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.999 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 09:33:21.999 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:21 localhost nova_compute[237268]: 2025-10-05 
09:33:21.999 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:21.999 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.000 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.000 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.000 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.000 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.000 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.001 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.001 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.001 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.001 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.001 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.002 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.002 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.002 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] preallocate_images = none log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.002 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.002 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.003 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.003 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.003 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.003 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.003 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 
09:33:22.003 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.004 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.004 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.004 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.004 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.004 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.005 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.005 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] reserved_host_memory_mb = 512 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.005 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.005 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.005 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.005 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.006 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.006 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.006 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 
localhost nova_compute[237268]: 2025-10-05 09:33:22.006 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.006 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.007 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.007 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.007 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.007 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.007 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.007 2 DEBUG oslo_service.service 
[None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.008 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.008 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.008 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.008 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.008 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.009 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.009 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] syslog_log_facility = LOG_USER log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.009 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.009 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.009 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.009 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.010 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.010 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.010 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.010 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.010 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.011 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.011 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.011 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.011 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.011 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.011 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 
05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.012 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.012 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.012 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.012 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.012 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.012 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.013 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 
2025-10-05 09:33:22.013 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.013 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.013 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.013 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.013 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.013 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.014 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.dhcp_domain = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.014 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.014 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.014 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.014 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.014 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.014 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.014 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.local_metadata_per_cell = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.015 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.015 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.015 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.015 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.015 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.015 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.015 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.016 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.016 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.016 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.016 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.016 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.016 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.017 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 
09:33:22.017 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.017 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.017 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.017 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.017 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.017 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.018 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.018 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.018 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.018 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.018 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.018 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.018 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.019 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.019 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.019 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.019 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.019 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.019 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.019 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.020 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.020 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost 
nova_compute[237268]: 2025-10-05 09:33:22.020 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.020 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.020 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.020 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.020 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.020 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.021 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.021 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.021 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.021 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.021 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.021 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.021 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.022 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.022 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.cross_az_attach = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.022 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.022 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.022 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.022 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.022 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.023 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.023 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.023 2 
DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.023 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.023 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.023 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.023 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.024 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.024 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.024 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.024 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.024 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.024 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.024 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.025 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.025 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.025 2 DEBUG 
oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.025 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.025 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.025 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.025 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.026 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.026 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.026 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.026 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.026 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.026 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.026 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.026 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.027 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.027 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.027 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.027 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.027 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.027 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1456 DF PROTO=TCP SPT=52336 DPT=9105 SEQ=2151594192 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE5F8EF0000000001030307)
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.027 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.028 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.028 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.028 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.028 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.028 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.028 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.028 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.029 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.029 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.029 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.029 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.029 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.029 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.029 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.030 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.030 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.030 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.030 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.030 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.030 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.030 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.031 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.031 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.031 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.031 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.031 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.031 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.031 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.032 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.032 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.032 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.032 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.032 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.032 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.032 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.032 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.033 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.033 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.033 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.033 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.033 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.033 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.033 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.034 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.034 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.034 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.034 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.034 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.034 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.034 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.035 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.035 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.035 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.035 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.035 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.035 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.035 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.035 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.036 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.036 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.036 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.036 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.036 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.036 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.036 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.037 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.037 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.037 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.037 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.037 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.037 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.037 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.037 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.038 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.038 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.038 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.038 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.038 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.038 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.038 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.039 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.039 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.039 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.039 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.039 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.039 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.039 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.039 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.040 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.040 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.040 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.040 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.040 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.040 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.040 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.041 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.041 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.041 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.041 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.041 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.041 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.042 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.042 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.042 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.042 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.042 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.042 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.042 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.042 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.043 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.043 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.043 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.043 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.043 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.043 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.043 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.043 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.044 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.044 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.044 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.044 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.044 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.044 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.044 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.045 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.045 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.045 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.045 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.045 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.045 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.045 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.046 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.046 2 DEBUG oslo_service.service [None
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.046 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.046 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.046 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.046 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.046 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.047 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.047 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.047 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.047 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.047 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.047 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.048 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.048 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.048 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.048 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.048 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.048 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.048 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.048 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.049 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.049 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 
09:33:22.049 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.049 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.049 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.049 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.049 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.050 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.050 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.050 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.050 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.050 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.050 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.050 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.051 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.051 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.051 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.root_token_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.051 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.051 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.051 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.051 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.052 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.052 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.052 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 
09:33:22.052 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.052 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.052 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.052 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.052 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.053 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.053 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.053 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.053 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.053 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.053 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.053 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.054 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.054 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.054 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.054 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.054 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.054 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.054 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.054 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.055 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.055 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 
localhost nova_compute[237268]: 2025-10-05 09:33:22.055 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.055 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.055 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.055 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.055 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.056 2 DEBUG 
oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.060 2 WARNING oslo_config.cfg [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is 
deprecated for removal ( Oct 5 05:33:22 localhost nova_compute[237268]: live_migration_uri is deprecated for removal in favor of two other options that Oct 5 05:33:22 localhost nova_compute[237268]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Oct 5 05:33:22 localhost nova_compute[237268]: and ``live_migration_inbound_addr`` respectively. Oct 5 05:33:22 localhost nova_compute[237268]: ). Its value may be silently ignored in the future.#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.061 2 
DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rbd_secret_uuid = 659062ac-50b4-5607-b699-3105da7f55ee log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.rx_queue_size 
= 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.swtpm_enabled = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 
09:33:22.066 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service 
[None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.http_retries = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 
09:33:22.070 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost 
nova_compute[237268]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.split_loggers = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.recheck_quota = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 
09:33:22.082 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.085 2 DEBUG 
oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.insecure = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 
localhost nova_compute[237268]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.console_delay_seconds = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 
2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.novncproxy_base_url = 
http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vnc.vencrypt_client_key = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.enable_numa_live_migration 
= False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] zvm.image_tmp_path = /var/lib/nova/images 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] remote_debug.host = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.108 2 DEBUG 
oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.endpoint_id = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 
localhost nova_compute[237268]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None 
req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.user_id = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] 
os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_ovs.network_device_mtu 
= 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.121 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 
09:33:22.144 2 INFO nova.virt.node [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Determined node identity 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from /var/lib/nova/compute_id#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.144 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.145 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.145 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.145 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.154 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.157 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.157 2 INFO 
nova.virt.libvirt.driver [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Connection event '1' reason 'None'#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.170 2 DEBUG nova.virt.libvirt.volume.mount [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.179 2 INFO nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Libvirt host capabilities [capabilities XML stripped of markup in this capture; recoverable values: host UUID 8a2ee9a2-7fe7-4677-a151-037462d3ba7a; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp, rdma; memory 16116612, pages 4029153; security models selinux (baselabels system_u:system_r:svirt_t:s0, system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guests at 32 and 64 bit via /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (pc), pc-q35-rhel9.6.0 (q35), pc-q35-rhel8.6.0, pc-q35-rhel9.4.0, pc-q35-rhel8.5.0, pc-q35-rhel8.3.0, pc-q35-rhel7.6.0, pc-q35-rhel8.4.0, pc-q35-rhel9.2.0, pc-q35-rhel8.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.0.0, pc-q35-rhel8.1.0]#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.189 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.209 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: [domain capabilities XML stripped of markup in this capture; recoverable values: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-q35-rhel9.6.0; arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types rom, pflash); host CPU model EPYC-Rome, vendor AMD; usable CPU models include 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, ...] Oct 5 05:33:22 localhost
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v5 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Conroe Oct 5 05:33:22 localhost nova_compute[237268]: Conroe-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Genoa Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Genoa-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-IBPB Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v4 Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v1 Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v2 Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids-v2 Oct 5 05:33:22 
localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Haswell Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-noTSX Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-noTSX-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Haswell-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-noTSX Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: [libvirt domain capabilities dump; XML markup lost in log capture — recoverable values condensed below, groupings inferred from the libvirt domcapabilities schema]
Oct 5 05:33:22 localhost nova_compute[237268]:   cpu models: Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Oct 5 05:33:22 localhost nova_compute[237268]:   memory backing source types: file, anonymous, memfd
Oct 5 05:33:22 localhost nova_compute[237268]:   disk devices: disk, cdrom, floppy, lun; buses: fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Oct 5 05:33:22 localhost nova_compute[237268]:   graphics types: vnc, egl-headless, dbus
Oct 5 05:33:22 localhost nova_compute[237268]:   hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Oct 5 05:33:22 localhost nova_compute[237268]:   rng models: virtio, virtio-transitional, virtio-non-transitional; backends: random, egd, builtin
Oct 5 05:33:22 localhost nova_compute[237268]:   filesystem driver types: path, handle, virtiofs
Oct 5 05:33:22 localhost nova_compute[237268]:   tpm models: tpm-tis, tpm-crb; backends: emulator, external; backend version: 2.0
Oct 5 05:33:22 localhost nova_compute[237268]:   redirdev bus: usb
Oct 5 05:33:22 localhost nova_compute[237268]:   console/channel types: pty, unix
Oct 5 05:33:22 localhost nova_compute[237268]:   crypto model: qemu; backend: builtin
Oct 5 05:33:22 localhost nova_compute[237268]:   interface backends: default, passt
Oct 5 05:33:22 localhost nova_compute[237268]:   panic models: isa, hyperv
Oct 5 05:33:22 localhost nova_compute[237268]:   hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Oct 5 05:33:22 localhost nova_compute[237268]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.214 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Oct 5 05:33:22 localhost
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: /usr/libexec/qemu-kvm Oct 5 05:33:22 localhost nova_compute[237268]: kvm Oct 5 05:33:22 localhost nova_compute[237268]: pc-i440fx-rhel7.6.0 Oct 5 05:33:22 localhost nova_compute[237268]: i686 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: rom Oct 5 05:33:22 localhost nova_compute[237268]: pflash Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: yes Oct 5 05:33:22 localhost nova_compute[237268]: no Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: no Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: on Oct 5 05:33:22 localhost nova_compute[237268]: off Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: on Oct 5 05:33:22 localhost nova_compute[237268]: off Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome Oct 5 05:33:22 localhost nova_compute[237268]: AMD Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 486 Oct 5 05:33:22 localhost nova_compute[237268]: 486-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-noTSX Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-noTSX-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v4 Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-noTSX Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v5 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Conroe Oct 5 05:33:22 localhost nova_compute[237268]: Conroe-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: Denverton Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Genoa Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Genoa-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-IBPB Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v4 Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v1 Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v2 Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 
Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-noTSX
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-noTSX-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-noTSX
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v5
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v6
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v7
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge-v1
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge-v2
Oct 5 05:33:22 localhost nova_compute[237268]: KnightsMill
Oct 5 05:33:22 localhost nova_compute[237268]: KnightsMill-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G1-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G2
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G2-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G3
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G3-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G4
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G4-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G5
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G5-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Penryn
Oct 5 05:33:22 localhost nova_compute[237268]: Penryn-v1
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge-v1
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge-v2
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids-v1
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids-v2
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids-v3
Oct 5 05:33:22 localhost nova_compute[237268]: SierraForest
Oct 5 05:33:22 localhost nova_compute[237268]: SierraForest-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-noTSX-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-noTSX-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v5
Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge
Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Westmere
Oct 5 05:33:22 localhost nova_compute[237268]: Westmere-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Westmere-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Westmere-v2
Oct 5 05:33:22 localhost nova_compute[237268]: athlon
Oct 5 05:33:22 localhost nova_compute[237268]: athlon-v1
Oct 5 05:33:22 localhost nova_compute[237268]: core2duo
Oct 5 05:33:22 localhost nova_compute[237268]: core2duo-v1
Oct 5 05:33:22 localhost nova_compute[237268]: coreduo
Oct 5 05:33:22 localhost nova_compute[237268]: coreduo-v1
Oct 5 05:33:22 localhost nova_compute[237268]: kvm32
nova_compute[237268]: kvm32-v1 Oct 5 05:33:22 localhost nova_compute[237268]: kvm64 Oct 5 05:33:22 localhost nova_compute[237268]: kvm64-v1 Oct 5 05:33:22 localhost nova_compute[237268]: n270 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: n270-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: pentium Oct 5 05:33:22 localhost nova_compute[237268]: pentium-v1 Oct 5 05:33:22 localhost nova_compute[237268]: pentium2 Oct 5 05:33:22 localhost nova_compute[237268]: pentium2-v1 Oct 5 05:33:22 localhost nova_compute[237268]: pentium3 Oct 5 05:33:22 localhost nova_compute[237268]: pentium3-v1 Oct 5 05:33:22 localhost nova_compute[237268]: phenom Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: phenom-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: qemu32 Oct 5 05:33:22 localhost nova_compute[237268]: qemu32-v1 Oct 5 05:33:22 localhost nova_compute[237268]: qemu64 Oct 5 05:33:22 localhost nova_compute[237268]: qemu64-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: file Oct 5 05:33:22 localhost nova_compute[237268]: anonymous Oct 5 05:33:22 localhost nova_compute[237268]: memfd Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: disk Oct 5 05:33:22 localhost nova_compute[237268]: cdrom Oct 5 05:33:22 localhost nova_compute[237268]: floppy Oct 5 05:33:22 localhost nova_compute[237268]: lun Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: ide Oct 5 05:33:22 localhost nova_compute[237268]: fdc Oct 5 05:33:22 localhost nova_compute[237268]: scsi Oct 5 05:33:22 localhost nova_compute[237268]: virtio Oct 5 05:33:22 localhost nova_compute[237268]: usb Oct 5 05:33:22 localhost nova_compute[237268]: sata Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: virtio Oct 5 05:33:22 localhost nova_compute[237268]: virtio-transitional Oct 5 05:33:22 localhost nova_compute[237268]: virtio-non-transitional Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: vnc Oct 5 05:33:22 localhost nova_compute[237268]: egl-headless Oct 5 05:33:22 localhost nova_compute[237268]: dbus Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: subsystem Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: default Oct 5 05:33:22 localhost nova_compute[237268]: mandatory Oct 5 05:33:22 localhost nova_compute[237268]: requisite Oct 5 05:33:22 
localhost nova_compute[237268]: optional Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: usb Oct 5 05:33:22 localhost nova_compute[237268]: pci Oct 5 05:33:22 localhost nova_compute[237268]: scsi Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: virtio Oct 5 05:33:22 localhost nova_compute[237268]: virtio-transitional Oct 5 05:33:22 localhost nova_compute[237268]: virtio-non-transitional Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: random Oct 5 05:33:22 localhost nova_compute[237268]: egd Oct 5 05:33:22 localhost nova_compute[237268]: builtin Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: path Oct 5 05:33:22 localhost nova_compute[237268]: handle Oct 5 05:33:22 localhost nova_compute[237268]: virtiofs Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: tpm-tis Oct 5 05:33:22 localhost nova_compute[237268]: tpm-crb Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: emulator Oct 5 05:33:22 localhost nova_compute[237268]: external Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: 2.0 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: usb Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: pty Oct 5 05:33:22 localhost nova_compute[237268]: unix Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: qemu Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: builtin Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: default Oct 5 05:33:22 localhost nova_compute[237268]: passt Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: isa Oct 5 05:33:22 localhost nova_compute[237268]: hyperv Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: relaxed Oct 5 05:33:22 localhost nova_compute[237268]: vapic Oct 5 05:33:22 localhost nova_compute[237268]: spinlocks Oct 5 05:33:22 localhost nova_compute[237268]: vpindex Oct 5 05:33:22 localhost nova_compute[237268]: runtime Oct 5 05:33:22 localhost nova_compute[237268]: synic Oct 5 05:33:22 localhost nova_compute[237268]: stimer Oct 5 05:33:22 localhost nova_compute[237268]: reset Oct 5 05:33:22 localhost nova_compute[237268]: vendor_id Oct 5 05:33:22 localhost nova_compute[237268]: frequencies Oct 5 05:33:22 localhost nova_compute[237268]: reenlightenment Oct 5 05:33:22 localhost nova_compute[237268]: tlbflush Oct 5 05:33:22 localhost nova_compute[237268]: ipi Oct 5 05:33:22 localhost nova_compute[237268]: avic Oct 5 05:33:22 localhost nova_compute[237268]: emsr_bitmap Oct 5 05:33:22 localhost nova_compute[237268]: xmm_input Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.256 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.263 2 DEBUG nova.virt.libvirt.host 
[None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: /usr/libexec/qemu-kvm Oct 5 05:33:22 localhost nova_compute[237268]: kvm Oct 5 05:33:22 localhost nova_compute[237268]: pc-q35-rhel9.6.0 Oct 5 05:33:22 localhost nova_compute[237268]: x86_64 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: efi Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Oct 5 05:33:22 localhost nova_compute[237268]: /usr/share/edk2/ovmf/OVMF_CODE.fd Oct 5 05:33:22 localhost nova_compute[237268]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Oct 5 05:33:22 localhost nova_compute[237268]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: rom Oct 5 05:33:22 localhost nova_compute[237268]: pflash Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: yes Oct 5 05:33:22 localhost nova_compute[237268]: no Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: yes Oct 5 05:33:22 localhost nova_compute[237268]: no Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: on Oct 5 05:33:22 localhost 
nova_compute[237268]: off Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: on Oct 5 05:33:22 localhost nova_compute[237268]: off Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome Oct 5 05:33:22 localhost nova_compute[237268]: AMD Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 486 Oct 5 05:33:22 
localhost nova_compute[237268]: 486-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-noTSX Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-noTSX-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-noTSX Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v5 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Conroe Oct 5 05:33:22 localhost nova_compute[237268]: Conroe-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Genoa Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Genoa-v1
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-IBPB
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan-v1
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan-v2
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v1
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v2
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v3
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v4
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v1
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v2
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v3
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v4
Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids
Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids-v1
Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-noTSX
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-noTSX-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-noTSX
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v5
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v6
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v7
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge-v1
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge-v2
Oct 5 05:33:22 localhost nova_compute[237268]: KnightsMill
Oct 5 05:33:22 localhost nova_compute[237268]: KnightsMill-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G1-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G2
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G2-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G3
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G3-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G4
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G4-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G5
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G5-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Penryn
Oct 5 05:33:22 localhost nova_compute[237268]: Penryn-v1
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge-v1
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge-v2
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids-v1
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids-v2
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids-v3
Oct 5 05:33:22 localhost nova_compute[237268]: SierraForest
Oct 5 05:33:22 localhost nova_compute[237268]: SierraForest-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-noTSX-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v1
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Skylake-Server-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-noTSX-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 
Skylake-Server-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v5 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 
Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Snowridge-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Westmere Oct 5 05:33:22 localhost nova_compute[237268]: Westmere-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Westmere-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Westmere-v2 Oct 5 05:33:22 localhost nova_compute[237268]: athlon Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: athlon-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: core2duo Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: core2duo-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: coreduo Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: coreduo-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: kvm32 Oct 5 05:33:22 localhost nova_compute[237268]: kvm32-v1 Oct 5 05:33:22 localhost nova_compute[237268]: kvm64 Oct 5 05:33:22 localhost nova_compute[237268]: kvm64-v1 Oct 5 05:33:22 localhost nova_compute[237268]: n270 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: n270-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: pentium Oct 5 05:33:22 localhost nova_compute[237268]: pentium-v1 Oct 5 05:33:22 localhost nova_compute[237268]: pentium2 Oct 5 05:33:22 localhost nova_compute[237268]: pentium2-v1 Oct 5 05:33:22 localhost nova_compute[237268]: pentium3 Oct 5 05:33:22 localhost nova_compute[237268]: pentium3-v1 Oct 5 05:33:22 localhost nova_compute[237268]: phenom Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: phenom-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: qemu32 Oct 5 05:33:22 localhost 
nova_compute[237268]: qemu32-v1 Oct 5 05:33:22 localhost nova_compute[237268]: qemu64 Oct 5 05:33:22 localhost nova_compute[237268]: qemu64-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: file Oct 5 05:33:22 localhost nova_compute[237268]: anonymous Oct 5 05:33:22 localhost nova_compute[237268]: memfd Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: disk Oct 5 05:33:22 localhost nova_compute[237268]: cdrom Oct 5 05:33:22 localhost nova_compute[237268]: floppy Oct 5 05:33:22 localhost nova_compute[237268]: lun Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: fdc Oct 5 05:33:22 localhost nova_compute[237268]: scsi Oct 5 05:33:22 localhost nova_compute[237268]: virtio Oct 5 05:33:22 localhost nova_compute[237268]: usb Oct 5 05:33:22 localhost nova_compute[237268]: sata Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: virtio Oct 5 05:33:22 localhost nova_compute[237268]: virtio-transitional Oct 5 05:33:22 localhost nova_compute[237268]: virtio-non-transitional Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: vnc Oct 5 05:33:22 localhost nova_compute[237268]: egl-headless Oct 5 05:33:22 localhost nova_compute[237268]: dbus Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: subsystem Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: default Oct 5 05:33:22 localhost nova_compute[237268]: mandatory Oct 5 05:33:22 localhost nova_compute[237268]: requisite Oct 5 05:33:22 localhost nova_compute[237268]: optional Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: usb Oct 5 05:33:22 localhost nova_compute[237268]: pci Oct 5 05:33:22 localhost nova_compute[237268]: scsi Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: virtio Oct 5 05:33:22 localhost nova_compute[237268]: virtio-transitional Oct 5 05:33:22 localhost nova_compute[237268]: virtio-non-transitional Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: random Oct 5 05:33:22 localhost nova_compute[237268]: egd Oct 5 05:33:22 localhost nova_compute[237268]: builtin Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: path Oct 5 05:33:22 localhost nova_compute[237268]: handle Oct 5 05:33:22 localhost nova_compute[237268]: virtiofs Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: tpm-tis Oct 5 05:33:22 localhost nova_compute[237268]: tpm-crb Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: emulator Oct 5 05:33:22 localhost nova_compute[237268]: external Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 2.0 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: usb Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: pty Oct 5 05:33:22 localhost nova_compute[237268]: unix Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: qemu Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: builtin Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: default Oct 5 05:33:22 localhost nova_compute[237268]: passt Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: isa Oct 5 05:33:22 localhost nova_compute[237268]: hyperv Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: relaxed Oct 5 05:33:22 localhost nova_compute[237268]: vapic Oct 5 05:33:22 localhost nova_compute[237268]: spinlocks Oct 5 05:33:22 localhost nova_compute[237268]: vpindex Oct 5 05:33:22 localhost nova_compute[237268]: runtime Oct 5 05:33:22 localhost nova_compute[237268]: synic Oct 5 05:33:22 localhost nova_compute[237268]: stimer Oct 5 05:33:22 localhost nova_compute[237268]: reset Oct 5 05:33:22 localhost nova_compute[237268]: vendor_id Oct 5 05:33:22 localhost nova_compute[237268]: frequencies Oct 5 05:33:22 localhost nova_compute[237268]: reenlightenment Oct 5 05:33:22 localhost nova_compute[237268]: tlbflush Oct 5 05:33:22 localhost nova_compute[237268]: ipi Oct 5 05:33:22 localhost nova_compute[237268]: avic Oct 5 05:33:22 localhost nova_compute[237268]: emsr_bitmap Oct 5 05:33:22 localhost nova_compute[237268]: xmm_input Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: _get_domain_capabilities 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.310 2 DEBUG nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: /usr/libexec/qemu-kvm Oct 5 05:33:22 localhost nova_compute[237268]: kvm Oct 5 05:33:22 localhost nova_compute[237268]: pc-i440fx-rhel7.6.0 Oct 5 05:33:22 localhost nova_compute[237268]: x86_64 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: rom Oct 5 05:33:22 localhost nova_compute[237268]: pflash Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: yes Oct 5 05:33:22 localhost nova_compute[237268]: no Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: no Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: on Oct 5 05:33:22 localhost nova_compute[237268]: off Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: on Oct 5 05:33:22 localhost nova_compute[237268]: off Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome Oct 5 05:33:22 localhost nova_compute[237268]: AMD Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 486 Oct 5 05:33:22 localhost nova_compute[237268]: 486-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-noTSX Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-noTSX-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
Oct 5 05:33:22 localhost nova_compute[237268]: Broadwell-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server
Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-noTSX
Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Cascadelake-Server-v5
Oct 5 05:33:22 localhost nova_compute[237268]: Conroe
Oct 5 05:33:22 localhost nova_compute[237268]: Conroe-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake
Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Cooperlake-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Denverton
Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Denverton-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana
Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Dhyana-v2
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Genoa
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Genoa-v1
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-IBPB
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan-v1
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Milan-v2
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v1
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v2
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v3
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-Rome-v4
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v1
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v2
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v3
Oct 5 05:33:22 localhost nova_compute[237268]: EPYC-v4
Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids
Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids-v1
Oct 5 05:33:22 localhost nova_compute[237268]: GraniteRapids-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-noTSX
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-noTSX-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Haswell-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-noTSX
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v3
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v4
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v5
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v6
Oct 5 05:33:22 localhost nova_compute[237268]: Icelake-Server-v7
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge-v1
Oct 5 05:33:22 localhost nova_compute[237268]: IvyBridge-v2
Oct 5 05:33:22 localhost nova_compute[237268]: KnightsMill
Oct 5 05:33:22 localhost nova_compute[237268]: KnightsMill-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Nehalem-v2
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G1-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G2
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G2-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G3
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G3-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G4
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G4-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G5
Oct 5 05:33:22 localhost nova_compute[237268]: Opteron_G5-v1
Oct 5 05:33:22 localhost nova_compute[237268]: Penryn
Oct 5 05:33:22 localhost nova_compute[237268]: Penryn-v1
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge-IBRS
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge-v1
Oct 5 05:33:22 localhost nova_compute[237268]: SandyBridge-v2
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids
Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids-v1
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: SapphireRapids-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: SierraForest Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: SierraForest-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-noTSX-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Client-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-noTSX-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Skylake-Server-v5 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v2 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v3 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 
Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Snowridge-v4 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Westmere Oct 5 05:33:22 localhost nova_compute[237268]: Westmere-IBRS Oct 5 05:33:22 localhost nova_compute[237268]: Westmere-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Westmere-v2 Oct 5 05:33:22 localhost nova_compute[237268]: athlon Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: athlon-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: core2duo Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: core2duo-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: coreduo Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: coreduo-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: kvm32 Oct 5 05:33:22 localhost nova_compute[237268]: kvm32-v1 Oct 5 05:33:22 localhost nova_compute[237268]: kvm64 Oct 5 05:33:22 localhost nova_compute[237268]: kvm64-v1 Oct 5 05:33:22 localhost nova_compute[237268]: n270 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: n270-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: pentium Oct 5 05:33:22 localhost nova_compute[237268]: pentium-v1 Oct 5 05:33:22 localhost nova_compute[237268]: pentium2 Oct 5 05:33:22 localhost nova_compute[237268]: pentium2-v1 Oct 5 05:33:22 localhost nova_compute[237268]: pentium3 Oct 5 05:33:22 localhost nova_compute[237268]: pentium3-v1 Oct 5 05:33:22 localhost nova_compute[237268]: phenom Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: phenom-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: qemu32 Oct 5 05:33:22 localhost nova_compute[237268]: qemu32-v1 Oct 5 05:33:22 localhost nova_compute[237268]: qemu64 Oct 5 05:33:22 localhost nova_compute[237268]: qemu64-v1 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 
05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: file Oct 5 05:33:22 localhost nova_compute[237268]: anonymous Oct 5 05:33:22 localhost nova_compute[237268]: memfd Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: disk Oct 5 05:33:22 localhost nova_compute[237268]: cdrom Oct 5 05:33:22 localhost nova_compute[237268]: floppy Oct 5 05:33:22 localhost nova_compute[237268]: lun Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: ide Oct 5 05:33:22 localhost nova_compute[237268]: fdc Oct 5 05:33:22 localhost nova_compute[237268]: scsi Oct 5 05:33:22 localhost nova_compute[237268]: virtio Oct 5 05:33:22 localhost nova_compute[237268]: usb Oct 5 05:33:22 localhost nova_compute[237268]: sata Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: virtio Oct 5 05:33:22 localhost nova_compute[237268]: virtio-transitional Oct 5 05:33:22 localhost nova_compute[237268]: virtio-non-transitional Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: vnc Oct 5 05:33:22 localhost nova_compute[237268]: egl-headless Oct 5 05:33:22 localhost nova_compute[237268]: dbus Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost 
nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: subsystem Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: default Oct 5 05:33:22 localhost nova_compute[237268]: mandatory Oct 5 05:33:22 localhost nova_compute[237268]: requisite Oct 5 05:33:22 localhost nova_compute[237268]: optional Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: usb Oct 5 05:33:22 localhost nova_compute[237268]: pci Oct 5 05:33:22 localhost nova_compute[237268]: scsi Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: virtio Oct 5 05:33:22 localhost nova_compute[237268]: virtio-transitional Oct 5 05:33:22 localhost nova_compute[237268]: virtio-non-transitional Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: random Oct 5 05:33:22 localhost nova_compute[237268]: egd Oct 5 05:33:22 localhost nova_compute[237268]: builtin Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: path Oct 5 05:33:22 localhost nova_compute[237268]: handle Oct 5 05:33:22 localhost nova_compute[237268]: virtiofs Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: tpm-tis Oct 5 05:33:22 localhost 
nova_compute[237268]: tpm-crb Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: emulator Oct 5 05:33:22 localhost nova_compute[237268]: external Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: 2.0 Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: usb Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: pty Oct 5 05:33:22 localhost nova_compute[237268]: unix Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: qemu Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: builtin Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: default Oct 5 05:33:22 localhost nova_compute[237268]: passt Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: isa Oct 5 05:33:22 localhost nova_compute[237268]: hyperv Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 
localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: relaxed Oct 5 05:33:22 localhost nova_compute[237268]: vapic Oct 5 05:33:22 localhost nova_compute[237268]: spinlocks Oct 5 05:33:22 localhost nova_compute[237268]: vpindex Oct 5 05:33:22 localhost nova_compute[237268]: runtime Oct 5 05:33:22 localhost nova_compute[237268]: synic Oct 5 05:33:22 localhost nova_compute[237268]: stimer Oct 5 05:33:22 localhost nova_compute[237268]: reset Oct 5 05:33:22 localhost nova_compute[237268]: vendor_id Oct 5 05:33:22 localhost nova_compute[237268]: frequencies Oct 5 05:33:22 localhost nova_compute[237268]: reenlightenment Oct 5 05:33:22 localhost nova_compute[237268]: tlbflush Oct 5 05:33:22 localhost nova_compute[237268]: ipi Oct 5 05:33:22 localhost nova_compute[237268]: avic Oct 5 05:33:22 localhost nova_compute[237268]: emsr_bitmap Oct 5 05:33:22 localhost nova_compute[237268]: xmm_input Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: Oct 5 05:33:22 localhost nova_compute[237268]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.348 2 DEBUG 
nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.348 2 INFO nova.virt.libvirt.host [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Secure Boot support detected#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.351 2 INFO nova.virt.libvirt.driver [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.365 2 DEBUG nova.virt.libvirt.driver [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.394 2 INFO nova.virt.node [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Determined node identity 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from /var/lib/nova/compute_id#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.410 2 DEBUG nova.compute.manager [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Verified node 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c matches my host np0005471150.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.444 2 DEBUG
nova.compute.manager [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.448 2 DEBUG nova.virt.libvirt.vif [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T08:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005471150.localdomain',hostname='test',id=2,image_ref='e521096d-c3e6-4c8e-9ba6-a35f9a80b219',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T08:30:14Z,launched_on='np0005471150.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005471150.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='8b36437b65444bcdac75beef77b6981e',ramdisk_id='',reservation_id='r-dff44nva',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-10-05T08:30:14Z,user_data=None,user_id='8d17cd5027274bc5883e2354d4ddec6b',uuid=2b20c302-a8d1-4ee0-990b-24973ca23df1,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": 
[{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.448 2 DEBUG nova.network.os_vif_util [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Converting VIF {"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": 
null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.449 2 DEBUG nova.network.os_vif_util [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.450 2 DEBUG os_vif [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 5 05:33:22 localhost python3.9[237633]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.533 2 DEBUG ovsdbapp.backend.ovs_idl [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.533 2 DEBUG ovsdbapp.backend.ovs_idl [None 
req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.534 2 DEBUG ovsdbapp.backend.ovs_idl [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - 
-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:33:22 localhost nova_compute[237268]: 2025-10-05 09:33:22.561 2 INFO oslo.privsep.daemon [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpnkjbfkm6/privsep.sock']#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.165 2 INFO oslo.privsep.daemon [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.057 40 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.062 40 INFO 
oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.065 40 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.065 40 INFO oslo.privsep.daemon [-] privsep daemon running as pid 40#033[00m Oct 5 05:33:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4db5c636-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.485 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4db5c636-30, col_values=(('external_ids', {'iface-id': '4db5c636-3094-4e86-9093-8123489e64be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:2c:a3', 'vm-uuid': '2b20c302-a8d1-4ee0-990b-24973ca23df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.487 2 INFO os_vif [None 
req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30')#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.488 2 DEBUG nova.compute.manager [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.495 2 DEBUG nova.compute.manager [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Oct 5 05:33:23 localhost nova_compute[237268]: 2025-10-05 09:33:23.495 2 INFO nova.compute.manager [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Oct 5 05:33:23 localhost podman[237752]: 2025-10-05 09:33:23.558612059 +0000 UTC m=+0.096747105 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:33:23 localhost podman[237752]: 2025-10-05 09:33:23.566123965 +0000 UTC m=+0.104259041 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid) Oct 5 05:33:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:33:23 localhost python3.9[237751]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None 
healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Oct 5 05:33:23 localhost systemd-journald[47722]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 114.7 (382 of 333 items), suggesting rotation. 
Oct 5 05:33:23 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:33:23 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.045 2 INFO nova.service [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Updating service version for nova-compute on np0005471150.localdomain from 57 to 66#033[00m Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.076 2 DEBUG oslo_concurrency.lockutils [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.076 2 DEBUG oslo_concurrency.lockutils [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.077 2 DEBUG oslo_concurrency.lockutils [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.077 2 DEBUG nova.compute.resource_tracker [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Auditing locally
available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.078 2 DEBUG oslo_concurrency.processutils [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:33:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47583 DF PROTO=TCP SPT=39730 DPT=9882 SEQ=2837407388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6029D0000000001030307) Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.532 2 DEBUG oslo_concurrency.processutils [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.628 2 DEBUG nova.virt.libvirt.driver [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.628 2 DEBUG nova.virt.libvirt.driver [None req-37d91f93-26db-409e-88af-c5a15814108f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:33:24 localhost systemd[1]: Starting libvirt nodedev daemon... 
Oct 5 05:33:24 localhost python3.9[237928]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:33:24 localhost systemd[1]: Started libvirt nodedev daemon. Oct 5 05:33:24 localhost systemd[1]: Stopping nova_compute container... Oct 5 05:33:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:33:24 localhost systemd[1]: tmp-crun.IXzhH1.mount: Deactivated successfully. Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.792 2 DEBUG oslo_concurrency.lockutils [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.793 2 DEBUG oslo_concurrency.lockutils [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:33:24 localhost nova_compute[237268]: 2025-10-05 09:33:24.793 2 DEBUG oslo_concurrency.lockutils [None req-0b1f4921-372d-4011-a699-6f76580162cb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:33:24 localhost podman[237955]: 2025-10-05 09:33:24.863578553 +0000 UTC m=+0.137795977 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 
Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd) Oct 5 05:33:24 localhost podman[237955]: 2025-10-05 09:33:24.905808814 +0000 UTC m=+0.180026238 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:33:24 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:33:25 localhost journal[207037]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, ) Oct 5 05:33:25 localhost journal[207037]: hostname: np0005471150.localdomain Oct 5 05:33:25 localhost journal[207037]: End of file while reading data: Input/output error Oct 5 05:33:25 localhost systemd[1]: libpod-dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4.scope: Deactivated successfully. Oct 5 05:33:25 localhost systemd[1]: libpod-dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4.scope: Consumed 4.113s CPU time. 
Oct 5 05:33:25 localhost podman[237954]: 2025-10-05 09:33:25.211622721 +0000 UTC m=+0.490403148 container died dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:33:25 localhost systemd[1]: var-lib-containers-storage-overlay-dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c-merged.mount: Deactivated successfully. Oct 5 05:33:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:33:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:33:28 localhost podman[238261]: 2025-10-05 09:33:28.446222715 +0000 UTC m=+0.602308386 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:33:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47584 DF PROTO=TCP SPT=39730 DPT=9882 SEQ=2837407388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6125E0000000001030307) Oct 5 05:33:29 localhost podman[238261]: 
2025-10-05 09:33:29.464703717 +0000 UTC m=+1.620789358 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 05:33:30 localhost podman[237954]: 2025-10-05 09:33:30.211812132 +0000 UTC m=+5.490592579 container cleanup dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:33:30 localhost podman[237954]: nova_compute Oct 5 05:33:30 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:33:30 localhost podman[238287]: 2025-10-05 09:33:30.304837275 +0000 UTC m=+0.059191730 container cleanup dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3) Oct 5 05:33:30 localhost podman[238287]: nova_compute Oct 5 05:33:30 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Oct 5 05:33:30 localhost systemd[1]: Stopped nova_compute container. Oct 5 05:33:30 localhost systemd[1]: Starting nova_compute container... Oct 5 05:33:30 localhost systemd[1]: Started libcrun container. 
Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:30 localhost podman[238300]: 2025-10-05 09:33:30.447696936 +0000 UTC m=+0.113178210 container init dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 05:33:30 localhost podman[238300]: 2025-10-05 09:33:30.463648849 +0000 UTC m=+0.129130123 container start dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', 
'/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:33:30 localhost podman[238300]: nova_compute Oct 5 05:33:30 localhost nova_compute[238314]: + sudo -E kolla_set_configs Oct 5 05:33:30 localhost systemd[1]: Started nova_compute container. Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Validating config file Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying service configuration files Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Deleting /etc/nova/nova.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/nova/nova.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to 
/etc/nova/nova.conf.d/03-ceph-nova.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Deleting /etc/ceph Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Creating directory /etc/ceph Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/ceph Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for 
/etc/ceph/ceph.client.openstack.keyring Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Writing out command to execute Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:30 localhost nova_compute[238314]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 5 05:33:30 localhost nova_compute[238314]: ++ cat /run_command Oct 5 05:33:30 localhost nova_compute[238314]: + CMD=nova-compute Oct 5 05:33:30 localhost nova_compute[238314]: + ARGS= Oct 5 05:33:30 localhost nova_compute[238314]: + sudo kolla_copy_cacerts Oct 5 05:33:30 localhost nova_compute[238314]: + [[ ! 
-n '' ]] Oct 5 05:33:30 localhost nova_compute[238314]: + . kolla_extend_start Oct 5 05:33:30 localhost nova_compute[238314]: + echo 'Running command: '\''nova-compute'\''' Oct 5 05:33:30 localhost nova_compute[238314]: Running command: 'nova-compute' Oct 5 05:33:30 localhost nova_compute[238314]: + umask 0022 Oct 5 05:33:30 localhost nova_compute[238314]: + exec nova-compute Oct 5 05:33:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:33:31 localhost podman[238435]: 2025-10-05 09:33:31.296268731 +0000 UTC m=+0.092593843 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:33:31 localhost podman[238435]: 2025-10-05 09:33:31.328793313 +0000 UTC m=+0.125118405 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:33:31 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:33:31 localhost systemd[1]: tmp-crun.wtMg1Q.mount: Deactivated successfully. Oct 5 05:33:31 localhost python3.9[238436]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None 
hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Oct 5 05:33:31 localhost systemd[1]: Started libpod-conmon-e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710.scope. Oct 5 05:33:31 localhost systemd[1]: Started libcrun container. 
Oct 5 05:33:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59bfa73ece6e48d2e50d8d1487e9b2b63ac3b08462cb3a0828f618e73cc1808/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59bfa73ece6e48d2e50d8d1487e9b2b63ac3b08462cb3a0828f618e73cc1808/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59bfa73ece6e48d2e50d8d1487e9b2b63ac3b08462cb3a0828f618e73cc1808/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:31 localhost podman[238478]: 2025-10-05 09:33:31.743618647 +0000 UTC m=+0.147389624 container init e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, 
container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_managed=true) Oct 5 05:33:31 localhost podman[238478]: 2025-10-05 09:33:31.755221693 +0000 UTC m=+0.158992670 container start e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 05:33:31 localhost python3.9[238436]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Applying nova statedir ownership Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Changing 
ownership of /var/lib/nova from 1000:1000 to 42436:42436 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/2b20c302-a8d1-4ee0-990b-24973ca23df1/ Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/2b20c302-a8d1-4ee0-990b-24973ca23df1 already 42436:42436 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/2b20c302-a8d1-4ee0-990b-24973ca23df1 to system_u:object_r:container_file_t:s0 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/2b20c302-a8d1-4ee0-990b-24973ca23df1/console.log Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/36c8e772b4cca487d730e1df6ad67360170775c3 Oct 5 05:33:31 
localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/ Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-36c8e772b4cca487d730e1df6ad67360170775c3 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 
42436:42436 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7dbe5bae7bc27ef07490c629ec1f09edaa9e8c135ff89c3f08f1e44f39cf5928 Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7bff446e28da7b7609613334d4f266c2377bdec4e8e9a595eeb621178e5df9fb Oct 5 05:33:31 localhost nova_compute_init[238500]: INFO:nova_statedir:Nova statedir ownership complete Oct 5 05:33:31 localhost systemd[1]: libpod-e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710.scope: Deactivated successfully. 
Oct 5 05:33:31 localhost podman[238498]: 2025-10-05 09:33:31.835198663 +0000 UTC m=+0.060356678 container died e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}) Oct 5 05:33:31 localhost podman[238514]: 2025-10-05 09:33:31.962498632 +0000 UTC m=+0.124090720 container cleanup e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=nova_compute_init, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible) Oct 5 05:33:31 localhost systemd[1]: libpod-conmon-e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710.scope: Deactivated successfully. Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.227 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.228 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.228 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.228 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.343 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.368 2 DEBUG 
oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:33:32 localhost systemd[1]: var-lib-containers-storage-overlay-e59bfa73ece6e48d2e50d8d1487e9b2b63ac3b08462cb3a0828f618e73cc1808-merged.mount: Deactivated successfully. Oct 5 05:33:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710-userdata-shm.mount: Deactivated successfully. Oct 5 05:33:32 localhost systemd[1]: session-55.scope: Deactivated successfully. Oct 5 05:33:32 localhost systemd[1]: session-55.scope: Consumed 2min 34.760s CPU time. Oct 5 05:33:32 localhost systemd-logind[760]: Session 55 logged out. Waiting for processes to exit. Oct 5 05:33:32 localhost systemd-logind[760]: Removed session 55. Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.847 2 INFO nova.virt.driver [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.974 2 INFO nova.compute.provider_config [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.982 2 WARNING nova.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.983 2 DEBUG oslo_concurrency.lockutils [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.983 2 DEBUG oslo_concurrency.lockutils [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.984 2 DEBUG oslo_concurrency.lockutils [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.984 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.985 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ******************************************************************************** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.985 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.985 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.986 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.986 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.986 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.987 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.987 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] backdoor_port = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.987 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.987 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.988 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.988 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.988 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.989 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.989 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost 
nova_compute[238314]: 2025-10-05 09:33:32.989 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.989 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.990 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.990 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] console_host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.990 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.991 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.991 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.991 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.991 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.992 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.992 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.992 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.993 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] default_schedule_zone = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.993 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.993 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.994 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.994 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.994 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.994 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.995 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 
09:33:32.995 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.995 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.996 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.996 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.996 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.997 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.997 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.998 2 DEBUG 
oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.998 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.998 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.998 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.999 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238314]: 2025-10-05 09:33:32.999 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:32.999 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.000 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - 
- - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.000 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.000 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.001 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.001 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.001 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.001 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.002 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 
05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.002 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.002 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.002 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.003 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.003 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.003 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.004 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d 
%(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.004 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.004 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.004 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.005 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.005 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.005 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.005 2 DEBUG 
oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.006 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.006 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.006 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.007 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.007 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.007 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.007 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] migrate_max_retries = -1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.008 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.008 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.008 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.009 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.009 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.009 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.010 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m 
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.010 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.010 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.010 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.011 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.011 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.011 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.012 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.012 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.012 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.012 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.013 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.013 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.013 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.013 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.013 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] reclaim_instance_interval = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.014 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.014 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.014 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.014 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.014 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.015 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.015 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.015 2 DEBUG 
oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.015 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.015 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.016 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.016 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.016 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.016 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.016 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.016 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.017 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.017 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.017 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.017 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.017 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.018 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] shelved_poll_interval = 3600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.018 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.018 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.018 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.018 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.018 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.019 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.019 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 
09:33:33.019 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.019 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.019 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.020 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.020 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.020 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.020 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.020 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] use_json = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.021 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.021 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.021 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.021 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.021 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.021 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.022 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.022 2 DEBUG 
oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.022 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.022 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.022 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.023 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.023 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.023 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.023 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.023 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.024 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.024 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.024 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.024 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.024 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 
2025-10-05 09:33:33.025 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.025 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.025 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.025 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.025 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.026 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.026 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.026 2 
DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.026 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.026 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.028 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.028 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.028 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.028 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238314]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.cross_az_attach = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.037 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.037 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.037 2 
DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.037 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.037 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.040 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.040 2 DEBUG 
oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.040 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.040 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.040 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.connect_retries = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 
2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.045 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.045 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.045 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.045 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.045 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.db_inc_retry_interval = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238314]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.049 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.049 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.049 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.049 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.049 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.051 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.051 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.051 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.051 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.051 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 
2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.region_name = regionOne log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238314]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.057 2 DEBUG 
oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238314]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.status_code_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.065 2 DEBUG 
oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.verify_ssl = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican_service_user.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238314]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238314]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.valid_interfaces = ['internal', 
'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.074 2 
DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.iscsi_iface = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.077 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.077 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.077 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.077 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.077 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.077 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.077 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.078 2 WARNING oslo_config.cfg [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Oct 5 05:33:33 localhost nova_compute[238314]: live_migration_uri is deprecated for removal in favor of two other options that Oct 5 05:33:33 localhost nova_compute[238314]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Oct 
5 05:33:33 localhost nova_compute[238314]: and ``live_migration_inbound_addr`` respectively. Oct 5 05:33:33 localhost nova_compute[238314]: ). Its value may be silently ignored in the future.#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 
09:33:33.079 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rbd_secret_uuid = 659062ac-50b4-5607-b699-3105da7f55ee log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.083 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.swtpm_group = tss 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238314]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 
09:33:33.086 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.insecure = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238314]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238314]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None 
req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.status_code_retries = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238314]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service 
[None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.100 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.100 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.100 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.server_group_members = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] scheduler.workers = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.104 2 DEBUG 
oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts 
= False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] serial_console.serialproxy_port = 6083 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238314]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.110 2 DEBUG 
oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 
2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.use_linked_clone = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] 
workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service 
[None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.ssl_cert_file = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] zvm.reachable_timeout = 300 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] remote_debug.port = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread 
= False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - 
- -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.133 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.133 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.133 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.133 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.133 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.133 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.133 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.133 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.134 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.134 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.134 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.134 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.134 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.134 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.134 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.135 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.135 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.135 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.135 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.135 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.135 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.135 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.135 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.136 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.136 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.136 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.136 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.136 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.136 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.136 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.137 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.137 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.137 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.137 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.137 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.137 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.137 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.137 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.138 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.138 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.138 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.138 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.138 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.138 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.138 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.138 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.139 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.139 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.139 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.139 2 DEBUG oslo_service.service [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.140 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.151 2 INFO nova.virt.node [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Determined node identity 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from /var/lib/nova/compute_id#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.151 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.152 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.152 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.152 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.162 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.164 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.165 2 INFO nova.virt.libvirt.driver [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.183 2 DEBUG nova.virt.libvirt.volume.mount [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -]
Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.184 2 INFO nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Libvirt host capabilities
[The multi-line libvirt capabilities XML logged here lost its markup in this capture; only element text survived, interleaved with repeated empty "Oct 5 05:33:33 localhost nova_compute[238314]:" prefixes. Recoverable values: host UUID 8a2ee9a2-7fe7-4677-a151-037462d3ba7a; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory/page figures 16116612, 4029153, 0, 0; security models selinux (labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guest support at 32- and 64-bit word sizes via /usr/libexec/qemu-kvm, with machine types including pc-i440fx-rhel7.6.0 (pc), pc-q35-rhel9.6.0 (q35), and the pc-q35-rhel7.6.0 through pc-q35-rhel9.4.0 series. The capture is truncated mid-list.]
05:33:33 localhost nova_compute[238314]: pc-q35-rhel8.5.0 Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel8.3.0 Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel7.6.0 Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel8.4.0 Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel9.2.0 Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel8.2.0 Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel9.0.0 Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel8.0.0 Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel8.1.0 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: #033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.196 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.201 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: /usr/libexec/qemu-kvm Oct 5 05:33:33 localhost 
nova_compute[238314]: kvm Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel9.6.0 Oct 5 05:33:33 localhost nova_compute[238314]: i686 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: rom Oct 5 05:33:33 localhost nova_compute[238314]: pflash Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: yes Oct 5 05:33:33 localhost nova_compute[238314]: no Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: no Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: on Oct 5 05:33:33 localhost nova_compute[238314]: off Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: on Oct 5 05:33:33 localhost nova_compute[238314]: off Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome Oct 5 05:33:33 localhost nova_compute[238314]: AMD Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 486 Oct 5 05:33:33 localhost nova_compute[238314]: 486-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 
localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-noTSX Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-noTSX Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Conroe Oct 5 05:33:33 localhost nova_compute[238314]: Conroe-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Cooperlake Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 
Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cooperlake-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cooperlake-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Denverton Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Denverton-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Denverton-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Denverton-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Dhyana Oct 5 05:33:33 localhost nova_compute[238314]: Dhyana-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Dhyana-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Genoa Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 
localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Genoa-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-IBPB Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Milan Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Milan-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Milan-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v4 Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v1 Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v2 Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: GraniteRapids Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: GraniteRapids-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 
Oct 5 05:33:33 localhost nova_compute[238314]: [garbled libvirt capabilities XML omitted; recoverable CPU model names, in order:] GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge
5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Westmere Oct 5 05:33:33 localhost nova_compute[238314]: Westmere-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Westmere-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Westmere-v2 Oct 5 05:33:33 localhost nova_compute[238314]: athlon Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: athlon-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: core2duo Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: core2duo-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: coreduo Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: coreduo-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: kvm32 Oct 5 05:33:33 localhost nova_compute[238314]: kvm32-v1 Oct 5 05:33:33 localhost nova_compute[238314]: kvm64 Oct 5 05:33:33 localhost nova_compute[238314]: kvm64-v1 Oct 5 05:33:33 localhost nova_compute[238314]: n270 Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: n270-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: pentium Oct 5 05:33:33 localhost nova_compute[238314]: pentium-v1 Oct 5 05:33:33 localhost nova_compute[238314]: pentium2 Oct 5 05:33:33 localhost nova_compute[238314]: pentium2-v1 Oct 5 05:33:33 localhost nova_compute[238314]: pentium3 Oct 5 05:33:33 localhost nova_compute[238314]: pentium3-v1 Oct 5 05:33:33 localhost nova_compute[238314]: phenom Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: phenom-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: qemu32 Oct 5 05:33:33 localhost nova_compute[238314]: qemu32-v1 Oct 5 05:33:33 localhost nova_compute[238314]: qemu64 Oct 5 05:33:33 localhost nova_compute[238314]: qemu64-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: file Oct 5 05:33:33 localhost nova_compute[238314]: anonymous Oct 5 05:33:33 localhost nova_compute[238314]: memfd Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: disk Oct 5 05:33:33 localhost nova_compute[238314]: cdrom Oct 5 05:33:33 localhost nova_compute[238314]: floppy Oct 5 05:33:33 localhost nova_compute[238314]: lun Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: fdc Oct 5 05:33:33 localhost nova_compute[238314]: scsi Oct 5 05:33:33 localhost nova_compute[238314]: virtio Oct 5 05:33:33 localhost nova_compute[238314]: usb Oct 5 05:33:33 localhost nova_compute[238314]: sata Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: virtio Oct 5 05:33:33 localhost nova_compute[238314]: virtio-transitional Oct 5 05:33:33 localhost nova_compute[238314]: virtio-non-transitional Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: vnc Oct 5 05:33:33 localhost nova_compute[238314]: egl-headless Oct 5 05:33:33 localhost nova_compute[238314]: dbus Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: subsystem Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: default Oct 5 05:33:33 localhost nova_compute[238314]: mandatory Oct 5 05:33:33 localhost nova_compute[238314]: requisite Oct 5 05:33:33 localhost nova_compute[238314]: optional Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: usb Oct 5 05:33:33 localhost nova_compute[238314]: pci Oct 5 05:33:33 
localhost nova_compute[238314]: scsi Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: virtio Oct 5 05:33:33 localhost nova_compute[238314]: virtio-transitional Oct 5 05:33:33 localhost nova_compute[238314]: virtio-non-transitional Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: random Oct 5 05:33:33 localhost nova_compute[238314]: egd Oct 5 05:33:33 localhost nova_compute[238314]: builtin Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: path Oct 5 05:33:33 localhost nova_compute[238314]: handle Oct 5 05:33:33 localhost nova_compute[238314]: virtiofs Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: tpm-tis Oct 5 05:33:33 localhost nova_compute[238314]: tpm-crb Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: emulator Oct 5 05:33:33 localhost nova_compute[238314]: external Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 2.0 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 
usb Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: pty Oct 5 05:33:33 localhost nova_compute[238314]: unix Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: qemu Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: builtin Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: default Oct 5 05:33:33 localhost nova_compute[238314]: passt Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: isa Oct 5 05:33:33 localhost nova_compute[238314]: hyperv Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 
localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: relaxed Oct 5 05:33:33 localhost nova_compute[238314]: vapic Oct 5 05:33:33 localhost nova_compute[238314]: spinlocks Oct 5 05:33:33 localhost nova_compute[238314]: vpindex Oct 5 05:33:33 localhost nova_compute[238314]: runtime Oct 5 05:33:33 localhost nova_compute[238314]: synic Oct 5 05:33:33 localhost nova_compute[238314]: stimer Oct 5 05:33:33 localhost nova_compute[238314]: reset Oct 5 05:33:33 localhost nova_compute[238314]: vendor_id Oct 5 05:33:33 localhost nova_compute[238314]: frequencies Oct 5 05:33:33 localhost nova_compute[238314]: reenlightenment Oct 5 05:33:33 localhost nova_compute[238314]: tlbflush Oct 5 05:33:33 localhost nova_compute[238314]: ipi Oct 5 05:33:33 localhost nova_compute[238314]: avic Oct 5 05:33:33 localhost nova_compute[238314]: emsr_bitmap Oct 5 05:33:33 localhost nova_compute[238314]: xmm_input Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.209 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: /usr/libexec/qemu-kvm Oct 5 05:33:33 localhost nova_compute[238314]: kvm Oct 5 05:33:33 localhost nova_compute[238314]: pc-i440fx-rhel7.6.0 Oct 5 05:33:33 localhost nova_compute[238314]: i686 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: rom Oct 5 05:33:33 localhost nova_compute[238314]: pflash Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: yes Oct 5 05:33:33 localhost nova_compute[238314]: no Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: no Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: on Oct 5 05:33:33 localhost nova_compute[238314]: off Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: on Oct 5 05:33:33 localhost nova_compute[238314]: off Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome Oct 5 05:33:33 localhost nova_compute[238314]: AMD Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 486 Oct 5 05:33:33 localhost nova_compute[238314]: 486-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-noTSX Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-noTSX Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v2 Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Conroe Oct 5 05:33:33 localhost nova_compute[238314]: Conroe-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Cooperlake Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cooperlake-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Cooperlake-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Denverton Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Denverton-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
Oct 5 05:33:33 localhost nova_compute[238314]: Denverton-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Denverton-v3
Oct 5 05:33:33 localhost nova_compute[238314]: Dhyana
Oct 5 05:33:33 localhost nova_compute[238314]: Dhyana-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Dhyana-v2
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Genoa
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Genoa-v1
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-IBPB
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Milan
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Milan-v1
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Milan-v2
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v1
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v2
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v3
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v4
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v1
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v2
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v3
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v4
Oct 5 05:33:33 localhost nova_compute[238314]: GraniteRapids
Oct 5 05:33:33 localhost nova_compute[238314]: GraniteRapids-v1
Oct 5 05:33:33 localhost nova_compute[238314]: GraniteRapids-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-noTSX
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-noTSX-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-v3
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-v4
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-noTSX
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v3
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v4
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v5
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v6
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v7
Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge
Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge-v1
Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge-v2
Oct 5 05:33:33 localhost nova_compute[238314]: KnightsMill
Oct 5 05:33:33 localhost nova_compute[238314]: KnightsMill-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem
Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G1-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G2
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G2-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G3
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G3-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G4
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G4-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G5
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G5-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Penryn
Oct 5 05:33:33 localhost nova_compute[238314]: Penryn-v1
Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge
Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge-v1
Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge-v2
Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids
Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids-v1
Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids-v2
Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids-v3
Oct 5 05:33:33 localhost nova_compute[238314]: SierraForest
Oct 5 05:33:33 localhost nova_compute[238314]: SierraForest-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client
localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 
localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 
5 05:33:33 localhost nova_compute[238314]: Snowridge-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Westmere Oct 5 05:33:33 localhost nova_compute[238314]: Westmere-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Westmere-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Westmere-v2 Oct 5 05:33:33 localhost nova_compute[238314]: athlon Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: athlon-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: core2duo Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: core2duo-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: coreduo Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: coreduo-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: kvm32 Oct 5 05:33:33 localhost nova_compute[238314]: kvm32-v1 Oct 5 05:33:33 localhost nova_compute[238314]: kvm64 Oct 5 05:33:33 localhost nova_compute[238314]: kvm64-v1 Oct 5 05:33:33 localhost nova_compute[238314]: n270 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: n270-v1 Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: pentium Oct 5 05:33:33 localhost nova_compute[238314]: pentium-v1 Oct 5 05:33:33 localhost nova_compute[238314]: pentium2 Oct 5 05:33:33 localhost nova_compute[238314]: pentium2-v1 Oct 5 05:33:33 localhost nova_compute[238314]: pentium3 Oct 5 05:33:33 localhost nova_compute[238314]: pentium3-v1 Oct 5 05:33:33 localhost nova_compute[238314]: phenom Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: phenom-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: qemu32 Oct 5 05:33:33 localhost nova_compute[238314]: qemu32-v1 Oct 5 05:33:33 localhost nova_compute[238314]: qemu64 Oct 5 05:33:33 localhost nova_compute[238314]: qemu64-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: file Oct 5 05:33:33 localhost nova_compute[238314]: anonymous Oct 5 05:33:33 localhost nova_compute[238314]: memfd Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: disk Oct 5 05:33:33 localhost nova_compute[238314]: cdrom Oct 5 05:33:33 localhost nova_compute[238314]: floppy Oct 5 05:33:33 localhost nova_compute[238314]: lun Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: ide Oct 5 05:33:33 localhost nova_compute[238314]: fdc Oct 5 05:33:33 localhost nova_compute[238314]: scsi Oct 5 05:33:33 localhost nova_compute[238314]: virtio Oct 5 05:33:33 localhost nova_compute[238314]: usb Oct 5 05:33:33 localhost nova_compute[238314]: sata Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: virtio Oct 5 05:33:33 localhost nova_compute[238314]: virtio-transitional Oct 5 05:33:33 localhost nova_compute[238314]: virtio-non-transitional Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: vnc Oct 5 05:33:33 localhost nova_compute[238314]: egl-headless Oct 5 05:33:33 localhost nova_compute[238314]: dbus Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: subsystem Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: default Oct 5 05:33:33 localhost nova_compute[238314]: mandatory Oct 5 05:33:33 localhost nova_compute[238314]: requisite Oct 5 05:33:33 localhost nova_compute[238314]: optional Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: usb Oct 5 05:33:33 localhost nova_compute[238314]: pci Oct 5 05:33:33 localhost nova_compute[238314]: scsi Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: virtio Oct 5 05:33:33 localhost nova_compute[238314]: virtio-transitional Oct 5 05:33:33 localhost nova_compute[238314]: virtio-non-transitional Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: random Oct 5 05:33:33 localhost nova_compute[238314]: egd Oct 5 05:33:33 localhost nova_compute[238314]: builtin Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: path Oct 5 05:33:33 localhost nova_compute[238314]: handle Oct 5 05:33:33 localhost nova_compute[238314]: virtiofs Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: tpm-tis Oct 5 05:33:33 localhost nova_compute[238314]: tpm-crb Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: emulator Oct 5 05:33:33 localhost nova_compute[238314]: external Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 2.0 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: usb Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: pty Oct 5 05:33:33 localhost nova_compute[238314]: unix Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: qemu Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: builtin Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: default Oct 5 05:33:33 localhost nova_compute[238314]: passt Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: isa Oct 5 05:33:33 localhost nova_compute[238314]: hyperv Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: relaxed Oct 5 05:33:33 localhost nova_compute[238314]: vapic Oct 5 05:33:33 
localhost nova_compute[238314]: spinlocks Oct 5 05:33:33 localhost nova_compute[238314]: vpindex Oct 5 05:33:33 localhost nova_compute[238314]: runtime Oct 5 05:33:33 localhost nova_compute[238314]: synic Oct 5 05:33:33 localhost nova_compute[238314]: stimer Oct 5 05:33:33 localhost nova_compute[238314]: reset Oct 5 05:33:33 localhost nova_compute[238314]: vendor_id Oct 5 05:33:33 localhost nova_compute[238314]: frequencies Oct 5 05:33:33 localhost nova_compute[238314]: reenlightenment Oct 5 05:33:33 localhost nova_compute[238314]: tlbflush Oct 5 05:33:33 localhost nova_compute[238314]: ipi Oct 5 05:33:33 localhost nova_compute[238314]: avic Oct 5 05:33:33 localhost nova_compute[238314]: emsr_bitmap Oct 5 05:33:33 localhost nova_compute[238314]: xmm_input Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.246 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.251 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: /usr/libexec/qemu-kvm Oct 5 05:33:33 localhost nova_compute[238314]: kvm Oct 5 05:33:33 localhost nova_compute[238314]: pc-q35-rhel9.6.0 Oct 5 05:33:33 localhost nova_compute[238314]: x86_64 Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: efi Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Oct 5 05:33:33 localhost nova_compute[238314]: /usr/share/edk2/ovmf/OVMF_CODE.fd Oct 5 05:33:33 localhost nova_compute[238314]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Oct 5 05:33:33 localhost nova_compute[238314]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: rom Oct 5 05:33:33 localhost nova_compute[238314]: pflash Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: yes Oct 5 05:33:33 localhost nova_compute[238314]: no Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: yes Oct 5 05:33:33 localhost nova_compute[238314]: no Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: on Oct 5 05:33:33 localhost nova_compute[238314]: off Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: on Oct 5 05:33:33 localhost nova_compute[238314]: off Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 
localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome Oct 5 05:33:33 localhost nova_compute[238314]: AMD Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 486 Oct 5 05:33:33 localhost nova_compute[238314]: 486-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-noTSX
Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-noTSX-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v3
Oct 5 05:33:33 localhost nova_compute[238314]: Broadwell-v4
Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server
Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-noTSX
Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v3
Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v4
Oct 5 05:33:33 localhost nova_compute[238314]: Cascadelake-Server-v5
Oct 5 05:33:33 localhost nova_compute[238314]: Conroe
Oct 5 05:33:33 localhost nova_compute[238314]: Conroe-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Cooperlake
Oct 5 05:33:33 localhost nova_compute[238314]: Cooperlake-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Cooperlake-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Denverton
Oct 5 05:33:33 localhost nova_compute[238314]: Denverton-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Denverton-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Denverton-v3
Oct 5 05:33:33 localhost nova_compute[238314]: Dhyana
Oct 5 05:33:33 localhost nova_compute[238314]: Dhyana-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Dhyana-v2
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Genoa
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Genoa-v1
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-IBPB
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Milan
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Milan-v1
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Milan-v2
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v1
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v2
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v3
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-Rome-v4
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v1
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v2
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v3
Oct 5 05:33:33 localhost nova_compute[238314]: EPYC-v4
Oct 5 05:33:33 localhost nova_compute[238314]: GraniteRapids
Oct 5 05:33:33 localhost nova_compute[238314]: GraniteRapids-v1
Oct 5 05:33:33 localhost nova_compute[238314]: GraniteRapids-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-noTSX
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-noTSX-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-v3
Oct 5 05:33:33 localhost nova_compute[238314]: Haswell-v4
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-noTSX
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v3
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v4
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v5
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v6
Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v7
Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge
Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge-v1
Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge-v2
Oct 5 05:33:33 localhost nova_compute[238314]: KnightsMill
Oct 5 05:33:33 localhost nova_compute[238314]: KnightsMill-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem
Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem-v2
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G1-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G2
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G2-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G3
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G3-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G4
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G4-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G5
Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G5-v1
Oct 5 05:33:33 localhost nova_compute[238314]: Penryn
Oct 5 05:33:33 localhost nova_compute[238314]: Penryn-v1
Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge
Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge-IBRS
Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge-v1
Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge-v2
Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SierraForest Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SierraForest-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v2 Oct 5 05:33:33 localhost nova_compute[238314]: 
Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 
Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 
Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Westmere Oct 5 05:33:33 localhost nova_compute[238314]: Westmere-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Westmere-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Westmere-v2 Oct 5 05:33:33 localhost nova_compute[238314]: athlon Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: athlon-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: core2duo Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: core2duo-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: coreduo Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: coreduo-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 
Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: kvm32 Oct 5 05:33:33 localhost nova_compute[238314]: kvm32-v1 Oct 5 05:33:33 localhost nova_compute[238314]: kvm64 Oct 5 05:33:33 localhost nova_compute[238314]: kvm64-v1 Oct 5 05:33:33 localhost nova_compute[238314]: n270 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: n270-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: pentium Oct 5 05:33:33 localhost nova_compute[238314]: pentium-v1 Oct 5 05:33:33 localhost nova_compute[238314]: pentium2 Oct 5 05:33:33 localhost nova_compute[238314]: pentium2-v1 Oct 5 05:33:33 localhost nova_compute[238314]: pentium3 Oct 5 05:33:33 localhost nova_compute[238314]: pentium3-v1 Oct 5 05:33:33 localhost nova_compute[238314]: phenom Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: phenom-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: qemu32 Oct 5 05:33:33 localhost nova_compute[238314]: qemu32-v1 Oct 5 05:33:33 localhost nova_compute[238314]: qemu64 Oct 5 05:33:33 localhost nova_compute[238314]: qemu64-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: file Oct 5 05:33:33 localhost nova_compute[238314]: 
anonymous Oct 5 05:33:33 localhost nova_compute[238314]: memfd Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: disk Oct 5 05:33:33 localhost nova_compute[238314]: cdrom Oct 5 05:33:33 localhost nova_compute[238314]: floppy Oct 5 05:33:33 localhost nova_compute[238314]: lun Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: fdc Oct 5 05:33:33 localhost nova_compute[238314]: scsi Oct 5 05:33:33 localhost nova_compute[238314]: virtio Oct 5 05:33:33 localhost nova_compute[238314]: usb Oct 5 05:33:33 localhost nova_compute[238314]: sata Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: virtio Oct 5 05:33:33 localhost nova_compute[238314]: virtio-transitional Oct 5 05:33:33 localhost nova_compute[238314]: virtio-non-transitional Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: vnc Oct 5 05:33:33 localhost nova_compute[238314]: egl-headless Oct 5 05:33:33 localhost nova_compute[238314]: dbus Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: subsystem Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: default Oct 5 05:33:33 localhost nova_compute[238314]: mandatory 
Oct 5 05:33:33 localhost nova_compute[238314]: [libvirt domain capabilities XML; tags stripped by the logger, one element per line. Recoverable values: hotplug support requisite/optional; device buses usb, pci, scsi; disk models virtio, virtio-transitional, virtio-non-transitional; rng backends random, egd, builtin; filesystem drivers path, handle, virtiofs; TPM models tpm-tis, tpm-crb (backends emulator, external; version 2.0); redirdev bus usb; console/serial types pty, unix; firmware qemu; rng backend builtin; interface backends default, passt; panic models isa, hyperv; hyperv features relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.302 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 5 05:33:33 localhost nova_compute[238314]: [domain capabilities XML; tags stripped as above. Recoverable values: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-i440fx-rhel7.6.0; arch x86_64; firmware /usr/share/OVMF/OVMF_CODE.secboot.fd (loader types rom, pflash; readonly yes/no; secure-boot on/off); host CPU model EPYC-Rome, vendor AMD; reported CPU model list includes 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1 through Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1 through Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1 through EPYC-Rome-v4, EPYC-v1 through EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1 through Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1 through Icelake-Server-v3, ...]
localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 
5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Icelake-Server-v6 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
05:33:33 localhost nova_compute[238314]: Icelake-Server-v7 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: IvyBridge-v2 Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: KnightsMill Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: KnightsMill-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Nehalem-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G1 Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G1-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G2 Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G2-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G3 Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G3-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 
localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G4-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G5 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Opteron_G5-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Penryn Oct 5 05:33:33 localhost nova_compute[238314]: Penryn-v1 Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge-v1 Oct 5 05:33:33 localhost nova_compute[238314]: SandyBridge-v2 Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 
5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SapphireRapids-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 
05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SierraForest Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 
localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: SierraForest-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 
Skylake-Client-v3 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Client-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: 
Skylake-Server-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v2 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v3 Oct 5 05:33:33 localhost 
nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Skylake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Snowridge Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 5 05:33:33 localhost nova_compute[238314]: Oct 
Oct 5 05:33:33 localhost nova_compute[238314]: [libvirt domainCapabilities XML, markup lost in capture; recoverable values only:] CPU models: Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1; memory backing: file anonymous memfd; disk devices: disk cdrom floppy lun; disk buses: ide fdc scsi virtio usb sata; virtio virtio-transitional virtio-non-transitional; graphics: vnc egl-headless dbus; hostdev mode: subsystem; startupPolicy: default mandatory requisite optional; hostdev subsystem types: usb pci scsi; virtio virtio-transitional virtio-non-transitional; rng backends: random egd builtin; filesystem drivers: path handle virtiofs; TPM models: tpm-tis tpm-crb; TPM backends: emulator external; 2.0; usb; pty unix; qemu; builtin; default passt; panic models: isa hyperv; Hyper-V features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.348 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.348 2 INFO nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Secure Boot support detected#033[00m
Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.350 2 INFO nova.virt.libvirt.driver [None
req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.351 2 INFO nova.virt.libvirt.driver [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.364 2 DEBUG nova.virt.libvirt.driver [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.405 2 INFO nova.virt.node [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Determined node identity 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from /var/lib/nova/compute_id#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.431 2 DEBUG nova.compute.manager [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Verified node 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c matches my host np0005471150.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.493 2 DEBUG nova.compute.manager [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.499 2 DEBUG nova.virt.libvirt.vif [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T08:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005471150.localdomain',hostname='test',id=2,image_ref='e521096d-c3e6-4c8e-9ba6-a35f9a80b219',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T08:30:14Z,launched_on='np0005471150.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005471150.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='8b36437b65444bcdac75beef77b6981e',ramdisk_id='',reservation_id='r-dff44nva',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-10-05T08:30:14Z,user_data=None,user_id='8d17cd5027274bc5883e2354d4ddec6b',uuid=2b20c302-a8d1-4ee0-990b-24973ca23df1,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": 
"8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.499 2 DEBUG nova.network.os_vif_util [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Converting VIF {"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.500 2 DEBUG nova.network.os_vif_util [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Converted 
object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.501 2 DEBUG os_vif [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.551 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.551 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.552 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] 
tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:33:33 localhost 
nova_compute[238314]: 2025-10-05 09:33:33.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:33:33 localhost nova_compute[238314]: 2025-10-05 09:33:33.581 2 INFO oslo.privsep.daemon [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp43ln0g6f/privsep.sock']#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.243 2 INFO oslo.privsep.daemon [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.129 40 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.133 40 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.137 40 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.138 40 INFO oslo.privsep.daemon [-] privsep daemon running as pid 40#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, 
port=tap4db5c636-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4db5c636-30, col_values=(('external_ids', {'iface-id': '4db5c636-3094-4e86-9093-8123489e64be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:2c:a3', 'vm-uuid': '2b20c302-a8d1-4ee0-990b-24973ca23df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.536 2 INFO os_vif [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30')#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.536 2 DEBUG nova.compute.manager [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.540 2 DEBUG nova.compute.manager [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Current state is 1, state in DB is 1. 
_init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.541 2 INFO nova.compute.manager [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.803 2 DEBUG oslo_concurrency.lockutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.804 2 DEBUG oslo_concurrency.lockutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.804 2 DEBUG oslo_concurrency.lockutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.805 2 DEBUG nova.compute.resource_tracker [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:33:34 localhost nova_compute[238314]: 2025-10-05 09:33:34.806 2 DEBUG oslo_concurrency.processutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] 
Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.266 2 DEBUG oslo_concurrency.processutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.361 2 DEBUG nova.virt.libvirt.driver [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.362 2 DEBUG nova.virt.libvirt.driver [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.586 2 WARNING nova.virt.libvirt.driver [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.588 2 DEBUG nova.compute.resource_tracker [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12890MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.589 2 DEBUG oslo_concurrency.lockutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.589 2 DEBUG oslo_concurrency.lockutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.869 2 DEBUG nova.compute.resource_tracker [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.869 2 DEBUG nova.compute.resource_tracker [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.869 2 DEBUG nova.compute.resource_tracker [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:33:35 localhost nova_compute[238314]: 2025-10-05 09:33:35.883 2 DEBUG nova.scheduler.client.report [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Refreshing inventories for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.085 2 DEBUG nova.scheduler.client.report [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Updating ProviderTree inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 05:33:36 localhost 
nova_compute[238314]: 2025-10-05 09:33:36.086 2 DEBUG nova.compute.provider_tree [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.101 2 DEBUG nova.scheduler.client.report [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Refreshing aggregate associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.126 2 DEBUG nova.scheduler.client.report [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Refreshing trait associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, traits: 
COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.156 2 DEBUG oslo_concurrency.processutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.567 2 DEBUG oslo_concurrency.processutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.572 2 DEBUG nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Oct 5 05:33:36 localhost nova_compute[238314]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.572 2 INFO nova.virt.libvirt.host [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] kernel doesn't support AMD SEV#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.573 2 DEBUG nova.compute.provider_tree [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.574 2 DEBUG nova.virt.libvirt.driver [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.632 2 DEBUG nova.scheduler.client.report [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Updated inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with generation 3 in Placement from set_inventory_for_provider using data: 
{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.632 2 DEBUG nova.compute.provider_tree [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Updating resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.633 2 DEBUG nova.compute.provider_tree [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.693 2 DEBUG nova.compute.provider_tree [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Updating resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c generation from 4 to 5 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.716 2 DEBUG 
nova.compute.resource_tracker [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.716 2 DEBUG oslo_concurrency.lockutils [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.716 2 DEBUG nova.service [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.764 2 DEBUG nova.service [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Oct 5 05:33:36 localhost nova_compute[238314]: 2025-10-05 09:33:36.764 2 DEBUG nova.servicegroup.drivers.db [None req-6c0027fe-4c58-484d-a4b2-8f1b2e1a9c33 - - - - - -] DB_Driver: join new ServiceGroup member np0005471150.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Oct 5 05:33:38 localhost nova_compute[238314]: 2025-10-05 09:33:38.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:38 localhost sshd[238633]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:33:38 localhost systemd-logind[760]: New session 58 of user zuul. Oct 5 05:33:38 localhost systemd[1]: Started Session 58 of User zuul. 
Oct 5 05:33:38 localhost nova_compute[238314]: 2025-10-05 09:33:38.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:39 localhost python3.9[238744]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:33:41 localhost python3.9[238945]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:33:41 localhost systemd[1]: Reloading. Oct 5 05:33:41 localhost systemd-rc-local-generator[238969]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:33:41 localhost systemd-sysv-generator[238974]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:33:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:33:42 localhost python3.9[239088]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:33:42 localhost network[239105]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:33:42 localhost network[239106]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:33:42 localhost network[239107]: It is advised to switch to 'NetworkManager' instead for network management. 
Oct 5 05:33:43 localhost nova_compute[238314]: 2025-10-05 09:33:43.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26467 DF PROTO=TCP SPT=53498 DPT=9102 SEQ=2678858297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE64C5D0000000001030307) Oct 5 05:33:43 localhost nova_compute[238314]: 2025-10-05 09:33:43.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:33:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26468 DF PROTO=TCP SPT=53498 DPT=9102 SEQ=2678858297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6505D0000000001030307) Oct 5 05:33:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26469 DF PROTO=TCP SPT=53498 DPT=9102 SEQ=2678858297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6585D0000000001030307) Oct 5 05:33:46 localhost python3.9[239343]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:33:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6952 DF PROTO=TCP SPT=57394 DPT=9100 SEQ=1235877506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE659EC0000000001030307) Oct 5 05:33:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6953 DF PROTO=TCP SPT=57394 DPT=9100 SEQ=1235877506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE65DDD0000000001030307) Oct 5 05:33:48 localhost nova_compute[238314]: 2025-10-05 09:33:48.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:48 localhost nova_compute[238314]: 2025-10-05 09:33:48.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:48 localhost python3.9[239454]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:33:48 localhost systemd-journald[47722]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation. Oct 5 05:33:48 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:33:48 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:33:48 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:33:48 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:33:49 localhost python3.9[239565]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:33:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6954 DF PROTO=TCP SPT=57394 DPT=9100 SEQ=1235877506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE665DD0000000001030307) Oct 5 05:33:50 localhost python3.9[239675]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:33:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26470 DF PROTO=TCP SPT=53498 DPT=9102 SEQ=2678858297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6681D0000000001030307) Oct 5 05:33:51 localhost python3.9[239785]: 
ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:33:52 localhost python3.9[239895]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:33:52 localhost systemd[1]: Reloading. Oct 5 05:33:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33196 DF PROTO=TCP SPT=38132 DPT=9105 SEQ=3684566243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE66E1F0000000001030307) Oct 5 05:33:52 localhost systemd-sysv-generator[239920]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:33:52 localhost systemd-rc-local-generator[239916]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:33:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:33:53 localhost python3.9[240041]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:33:53 localhost nova_compute[238314]: 2025-10-05 09:33:53.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:53 localhost nova_compute[238314]: 2025-10-05 09:33:53.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42486 DF PROTO=TCP SPT=52582 DPT=9882 SEQ=4182666311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE677DD0000000001030307) Oct 5 05:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:33:54 localhost podman[240132]: 2025-10-05 09:33:54.698491244 +0000 UTC m=+0.092777234 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:33:54 localhost podman[240132]: 2025-10-05 09:33:54.735164258 +0000 UTC m=+0.129450248 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:33:54 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:33:54 localhost python3.9[240163]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:33:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:33:55 localhost systemd[1]: tmp-crun.UMKqyI.mount: Deactivated successfully. Oct 5 05:33:55 localhost podman[240278]: 2025-10-05 09:33:55.705552056 +0000 UTC m=+0.112505949 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:33:55 localhost podman[240278]: 2025-10-05 09:33:55.71699002 +0000 UTC m=+0.123943993 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:33:55 localhost python3.9[240277]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:55 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:33:56 localhost python3.9[240406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:33:57 localhost python3.9[240492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656835.94957-359-253784070014089/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=b92417fe429db3872426034792498951fd910213 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:33:57 localhost python3.9[240602]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None Oct 5 05:33:58 localhost nova_compute[238314]: 2025-10-05 09:33:58.155 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42487 DF PROTO=TCP SPT=52582 DPT=9882 SEQ=4182666311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6879D0000000001030307) Oct 5 05:33:58 localhost nova_compute[238314]: 2025-10-05 09:33:58.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:33:59 localhost python3.9[240712]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None Oct 5 05:34:00 localhost python3.9[240823]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 5 05:34:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:34:00 localhost systemd[1]: tmp-crun.UwXE5B.mount: Deactivated successfully. 
Oct 5 05:34:00 localhost podman[240824]: 2025-10-05 09:34:00.442861256 +0000 UTC m=+0.093933185 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller) Oct 5 05:34:00 localhost podman[240824]: 2025-10-05 09:34:00.510150833 +0000 UTC m=+0.161222762 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:34:00 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:34:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:34:01 localhost systemd[1]: tmp-crun.dccj9w.mount: Deactivated successfully. 
Oct 5 05:34:01 localhost podman[240942]: 2025-10-05 09:34:01.682848014 +0000 UTC m=+0.083458848 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 05:34:01 localhost podman[240942]: 2025-10-05 09:34:01.688082033 +0000 UTC 
m=+0.088692857 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:34:01 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:34:01 localhost python3.9[240982]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471150.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Oct 5 05:34:03 localhost nova_compute[238314]: 2025-10-05 09:34:03.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:03 localhost python3.9[241098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:03 localhost nova_compute[238314]: 2025-10-05 09:34:03.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:03 localhost python3.9[241184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759656842.8406484-563-242838767307282/.source.conf _original_basename=ceilometer.conf follow=False checksum=307739b44452a4a1b48764f90c8d60cb24d1ca87 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None 
seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:04 localhost python3.9[241292]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:04 localhost python3.9[241378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759656843.993665-563-156385454829753/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:05 localhost python3.9[241486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:06 localhost python3.9[241572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759656845.1275272-563-265345152377873/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:06 localhost python3.9[241680]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:34:07 localhost python3.9[241788]: ansible-ansible.builtin.stat Invoked with 
path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:34:07 localhost python3.9[241896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:08 localhost nova_compute[238314]: 2025-10-05 09:34:08.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:08 localhost python3.9[241982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656847.5513234-740-49857888681810/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:08 localhost nova_compute[238314]: 2025-10-05 09:34:08.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:09 localhost python3.9[242090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:09 localhost python3.9[242145]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:10 localhost python3.9[242253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:10 localhost python3.9[242339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656849.7308176-740-12025416280310/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:11 localhost python3.9[242447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:11 localhost python3.9[242533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656850.8807733-740-35843313106578/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 
05:34:12 localhost python3.9[242641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:13 localhost nova_compute[238314]: 2025-10-05 09:34:13.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61262 DF PROTO=TCP SPT=44952 DPT=9102 SEQ=2635295817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6C18D0000000001030307) Oct 5 05:34:13 localhost python3.9[242727]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656852.006517-740-280404181912679/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:13 localhost nova_compute[238314]: 2025-10-05 09:34:13.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:14 localhost python3.9[242835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=61263 DF PROTO=TCP SPT=44952 DPT=9102 SEQ=2635295817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6C59E0000000001030307) Oct 5 05:34:14 localhost python3.9[242921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656853.7012885-740-208950947765133/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:15 localhost python3.9[243029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:16 localhost python3.9[243115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656854.8186014-740-255225990754308/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61264 DF PROTO=TCP SPT=44952 DPT=9102 SEQ=2635295817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6CD9D0000000001030307) Oct 5 05:34:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11861 DF PROTO=TCP SPT=48942 DPT=9100 SEQ=998327392 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6CF1C0000000001030307) Oct 5 05:34:16 localhost python3.9[243223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:17 localhost python3.9[243309]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656856.4463444-740-279261949499592/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11862 DF PROTO=TCP SPT=48942 DPT=9100 SEQ=998327392 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6D31D0000000001030307) Oct 5 05:34:17 localhost python3.9[243417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:18 localhost nova_compute[238314]: 2025-10-05 09:34:18.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:18 localhost python3.9[243503]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656857.5145814-740-89321363380589/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:18 localhost nova_compute[238314]: 2025-10-05 09:34:18.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:19 localhost python3.9[243611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:19 localhost python3.9[243697]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656858.658559-740-218670097951321/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11863 DF PROTO=TCP SPT=48942 DPT=9100 SEQ=998327392 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6DB1E0000000001030307) Oct 5 05:34:20 localhost python3.9[243805]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:34:20.429 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:34:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:34:20.430 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:34:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:34:20.431 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:34:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61265 DF PROTO=TCP SPT=44952 DPT=9102 SEQ=2635295817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6DD5D0000000001030307) Oct 5 05:34:20 localhost python3.9[243891]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656859.7546809-740-239292461660635/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:20 localhost nova_compute[238314]: 2025-10-05 09:34:20.766 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:20 localhost nova_compute[238314]: 2025-10-05 09:34:20.785 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Triggering sync for uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Oct 5 05:34:20 localhost nova_compute[238314]: 2025-10-05 09:34:20.786 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:34:20 localhost nova_compute[238314]: 2025-10-05 09:34:20.786 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:34:20 localhost nova_compute[238314]: 2025-10-05 09:34:20.787 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:20 localhost nova_compute[238314]: 2025-10-05 09:34:20.822 2 DEBUG oslo_concurrency.lockutils [None 
req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:34:21 localhost python3.9[244001]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:34:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6831 DF PROTO=TCP SPT=48038 DPT=9105 SEQ=1438535066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6E34F0000000001030307) Oct 5 05:34:22 localhost python3.9[244111]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:34:22 localhost systemd[1]: Reloading. Oct 5 05:34:22 localhost systemd-rc-local-generator[244133]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:34:22 localhost systemd-sysv-generator[244137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:34:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:34:22 localhost systemd[1]: Listening on Podman API Socket. Oct 5 05:34:23 localhost nova_compute[238314]: 2025-10-05 09:34:23.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:23 localhost nova_compute[238314]: 2025-10-05 09:34:23.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:23 localhost python3.9[244260]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:24 localhost python3.9[244348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656863.2949636-1256-126542279070404/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:34:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48208 DF PROTO=TCP SPT=58692 DPT=9882 SEQ=1058442538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6ED1D0000000001030307) Oct 5 05:34:24 localhost python3.9[244403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:25 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:34:25 localhost systemd[1]: tmp-crun.Lrw0FE.mount: Deactivated successfully. Oct 5 05:34:25 localhost podman[244491]: 2025-10-05 09:34:25.262262697 +0000 UTC m=+0.098449575 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid) Oct 5 05:34:25 localhost podman[244491]: 2025-10-05 09:34:25.278777556 +0000 UTC 
m=+0.114964384 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3) Oct 5 05:34:25 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:34:25 localhost python3.9[244492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656863.2949636-1256-126542279070404/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:34:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:34:26 localhost systemd[1]: tmp-crun.IZ0soU.mount: Deactivated successfully. Oct 5 05:34:26 localhost podman[244565]: 2025-10-05 09:34:26.689593911 +0000 UTC m=+0.095485947 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:34:26 localhost podman[244565]: 2025-10-05 09:34:26.728806112 +0000 UTC m=+0.134698138 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:34:26 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:34:27 localhost python3.9[244639]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False Oct 5 05:34:28 localhost python3.9[244749]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:34:28 localhost nova_compute[238314]: 2025-10-05 09:34:28.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48209 DF PROTO=TCP SPT=58692 DPT=9882 SEQ=1058442538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEE6FCDD0000000001030307) Oct 5 05:34:28 localhost nova_compute[238314]: 2025-10-05 09:34:28.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:30 localhost python3[244859]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json 
log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:34:30 localhost python3[244859]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "189fb56a112f774faa3c37fc532d9af434502871e8ddbdfe438285d2328ac9f5",#012 "Digest": "sha256:9aef12e39064170db87cb85373e2d10a5b618c8a9e6f50c6e9db72c91a337fc2",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:9aef12e39064170db87cb85373e2d10a5b618c8a9e6f50c6e9db72c91a337fc2"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:23:37.889226851Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 505025280,#012 "VirtualSize": 505025280,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/ee06ff9b297b077dce5c039f42b6c19c94978847093570b7b6066a30f5615938/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/750273294f7ba0ffeaf17c632cdda1a5fbbb0fc1490e1e8d52d534c991add83d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/750273294f7ba0ffeaf17c632cdda1a5fbbb0fc1490e1e8d52d534c991add83d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:393f6536d9533e4890767f39ad657c20a3212b85c896ad1265872ed467d9b400",#012 "sha256:e3cd21c5b0533deb516897a0fc70f87f5bbfee3dc8cfa1ae1c00914563e8021d"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": 
"/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 
'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 Oct 5 05:34:30 localhost podman[244910]: 2025-10-05 09:34:30.396742254 +0000 UTC m=+0.093951986 container remove 712972fff80955ddd90e03181c5af99e63e05290c63bed19fbebda4c6f65752e (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b9a01754dad058662a16b1bcdedd274e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, 
config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 05:34:30 localhost python3[244859]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute Oct 5 05:34:30 localhost podman[244924]: Oct 5 05:34:30 localhost podman[244924]: 2025-10-05 09:34:30.498476806 +0000 UTC m=+0.083598822 container create dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:34:30 localhost podman[244924]: 2025-10-05 09:34:30.458964196 +0000 UTC m=+0.044086222 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Oct 5 05:34:30 localhost python3[244859]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start Oct 5 05:34:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:34:30 localhost podman[244949]: 2025-10-05 09:34:30.723146272 +0000 UTC m=+0.134764270 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:34:30 localhost podman[244949]: 2025-10-05 09:34:30.762909128 +0000 UTC m=+0.174527086 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, 
tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Oct 5 05:34:30 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:34:31 localhost python3.9[245096]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:34:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:34:32 localhost systemd[1]: tmp-crun.q5zH7f.mount: Deactivated successfully. 
Oct 5 05:34:32 localhost podman[245209]: 2025-10-05 09:34:32.085542431 +0000 UTC m=+0.097938192 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:34:32 localhost podman[245209]: 2025-10-05 09:34:32.091237582 +0000 UTC 
m=+0.103633373 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:34:32 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:34:32 localhost python3.9[245208]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:32 localhost nova_compute[238314]: 2025-10-05 09:34:32.424 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238314]: 2025-10-05 09:34:32.424 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238314]: 2025-10-05 09:34:32.425 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:34:32 localhost nova_compute[238314]: 2025-10-05 09:34:32.425 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:34:32 localhost python3.9[245335]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656872.255216-1448-196001796078429/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False 
follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:33 localhost nova_compute[238314]: 2025-10-05 09:34:33.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:33 localhost nova_compute[238314]: 2025-10-05 09:34:33.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:33 localhost python3.9[245390]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:34:33 localhost systemd[1]: Reloading. Oct 5 05:34:33 localhost systemd-rc-local-generator[245411]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:34:33 localhost systemd-sysv-generator[245420]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:34:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.199 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.200 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.201 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.201 2 DEBUG nova.objects.instance [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.689 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": 
"192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.710 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.711 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.712 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.712 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.713 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.713 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.713 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.714 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.715 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.715 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.734 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.734 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.735 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.735 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:34:34 localhost nova_compute[238314]: 2025-10-05 09:34:34.736 2 DEBUG 
oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:34:34 localhost python3.9[245481]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:34:34 localhost systemd[1]: Reloading. Oct 5 05:34:34 localhost systemd-sysv-generator[245534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:34:34 localhost systemd-rc-local-generator[245528]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:34:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:34:35 localhost systemd[1]: Starting ceilometer_agent_compute container... Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.195 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:34:35 localhost systemd[1]: Started libcrun container. 
Oct 5 05:34:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c2df51ffe94e92ff9566b75b5f0a189f69f72e5f9f1aed18e449bfb243900d/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Oct 5 05:34:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c2df51ffe94e92ff9566b75b5f0a189f69f72e5f9f1aed18e449bfb243900d/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Oct 5 05:34:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:34:35 localhost podman[245542]: 2025-10-05 09:34:35.321003918 +0000 UTC m=+0.148254988 container init dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: + sudo -E kolla_set_configs Oct 5 05:34:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: sudo: unable to send audit message: Operation not permitted Oct 5 05:34:35 localhost podman[245542]: 2025-10-05 09:34:35.3591232 +0000 UTC m=+0.186374230 container start dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:34:35 localhost podman[245542]: ceilometer_agent_compute Oct 5 05:34:35 localhost systemd[1]: Started ceilometer_agent_compute container. Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.369 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.370 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Validating config file Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Copying service configuration files Oct 5 05:34:35 localhost 
ceilometer_agent_compute[245558]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: INFO:__main__:Writing out command to execute Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: ++ cat /run_command Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: + ARGS= Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: + sudo kolla_copy_cacerts Oct 5 05:34:35 localhost 
ceilometer_agent_compute[245558]: sudo: unable to send audit message: Operation not permitted Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: + [[ ! -n '' ]] Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: + . kolla_extend_start Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: + umask 0022 Oct 5 05:34:35 localhost ceilometer_agent_compute[245558]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Oct 5 05:34:35 localhost podman[245566]: 2025-10-05 09:34:35.486036901 +0000 UTC m=+0.117450690 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute) Oct 5 05:34:35 localhost podman[245566]: 2025-10-05 09:34:35.521372369 +0000 UTC m=+0.152786198 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 05:34:35 localhost podman[245566]: unhealthy Oct 5 05:34:35 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:34:35 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Failed with result 'exit-code'. Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.629 2 WARNING nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.631 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12826MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.631 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.632 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.691 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.691 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.692 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:34:35 localhost nova_compute[238314]: 2025-10-05 09:34:35.726 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:34:36 localhost python3.9[245717]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.227 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Oct 5 05:34:36 
localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 
09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.228 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.229 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG 
cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.230 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = 
['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.231 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG 
cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG 
cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.232 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 
05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.233 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG 
cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 
DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.234 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.235 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.236 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 
DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.237 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG 
cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.238 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.239 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost nova_compute[238314]: 2025-10-05 09:34:36.239 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.240 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.241 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.241 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.241 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.241 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.241 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.241 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.241 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.241 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 5 05:34:36 localhost nova_compute[238314]: 2025-10-05 09:34:36.246 2 DEBUG nova.compute.provider_tree [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 5 05:34:36 localhost systemd[1]: Stopping ceilometer_agent_compute container...
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.268 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.269 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.270 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 5 05:34:36 localhost nova_compute[238314]: 2025-10-05 09:34:36.271 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 5 05:34:36 localhost nova_compute[238314]: 2025-10-05 09:34:36.275 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 5 05:34:36 localhost nova_compute[238314]: 2025-10-05 09:34:36.275 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.334 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.338 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG
cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.412 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.413 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 
09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.414 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.415 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.416 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.417 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.417 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.417 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.417 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.417 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever 
/usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.420 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.430 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.435 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.435 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308 Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.435 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12] Oct 5 05:34:36 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:36.819 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}133a5369671f4fb57cbade9dcbacc944f834d8fc4dab86019e71c25d3a3ce471" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.035 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 327 Content-Type: application/json Date: 
Sun, 05 Oct 2025 09:34:36 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-462770b3-5ada-4ebe-aba5-b24fd41740b5 x-openstack-request-id: req-462770b3-5ada-4ebe-aba5-b24fd41740b5 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.035 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "76acf371-9e6c-4c5c-aec4-748e712efe27", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.035 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-462770b3-5ada-4ebe-aba5-b24fd41740b5 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.037 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}133a5369671f4fb57cbade9dcbacc944f834d8fc4dab86019e71c25d3a3ce471" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.072 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 494 Content-Type: application/json Date: 
Sun, 05 Oct 2025 09:34:37 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d8f93c9b-0427-4551-8734-f5451e4edcfd x-openstack-request-id: req-d8f93c9b-0427-4551-8734-f5451e4edcfd _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.073 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "76acf371-9e6c-4c5c-aec4-748e712efe27", "name": "m1.small", "ram": 512, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 1, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.073 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27 used request id req-d8f93c9b-0427-4551-8734-f5451e4edcfd request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.074 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 
'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.075 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.080 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2b20c302-a8d1-4ee0-990b-24973ca23df1 / tap4db5c636-30 inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.080 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4de2e2d0-4da6-436e-80f3-55237874fba0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.075473', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '825ad90a-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': '873b98b3947938e906182d6d186705369bf2967cb3baae45ccee2f991bb89adc'}]}, 'timestamp': '2025-10-05 09:34:37.081691', '_unique_id': '03befc0f68ac4b74abe36124a4f5ddd0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.088 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.092 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.120 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1213559769 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.120 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 162365672 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4b116616-57f4-4ec4-93c7-42991fada241', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1213559769, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:37.092344', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8260ddb4-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': '37fdd261bae724ba3967e29fb97b2a3cb79de4b62aec9316449d670b86e9a89f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 162365672, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:37.092344', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8260f2ea-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': '368659b856ae522047d8e4873e21bc5f33ca16c212b6935a44c4c4dd50687a40'}]}, 'timestamp': '2025-10-05 09:34:37.121524', '_unique_id': 'e5b27307d77a45c5974adfdddd3a21b8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.123 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.124 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.124 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '33566055-cd64-442e-9059-40c4db3ffec9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.124590', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '82618156-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': 'b4de85cc173c8427eb4a3a7c4f65dc52aebe5538541f8d965ec1a41ec3eccf55'}]}, 'timestamp': '2025-10-05 09:34:37.125114', '_unique_id': 'bc9810a722f64653ba1e7abf030fdad3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.126 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.127 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.127 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.127 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd7512f81-0b70-4f01-bf49-21757292f5ac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:37.127483', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8261f190-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': '19a88048c189123cb6c70bd4761fbac794af6f162561e1976ada223d235672ff'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:37.127483', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '82620400-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': 'd2735ef43fefabfd15301eb99ae60fad8b9289f5704ae17840dda7546e384181'}]}, 'timestamp': '2025-10-05 09:34:37.128452', '_unique_id': '8ea6c4a64124448b900af0cfaea5f1ff'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.129 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.130 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.157 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 52.31640625 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3db70aa3-749c-4259-b200-0501c81a295e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.31640625, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:34:37.130798', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '82669948-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.38152093, 'message_signature': '07b2be1b1a6d10c29057c6ae3a6637cb3961425c05f5d33e7604f58a0a8f98bd'}]}, 'timestamp': '2025-10-05 09:34:37.158507', '_unique_id': 'b6e0d5036c6e4050a296c7c342301463'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 
09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.159 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.161 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 
09:34:37.161 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8782 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '988cb9dc-3384-452d-89ec-b9d88c474f70', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8782, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.161188', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '82671e90-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': 
'513cc0448eb38a5c82bbc02e6174fe8d0156c805ef336230b00127fe811d031c'}]}, 'timestamp': '2025-10-05 09:34:37.161892', '_unique_id': 'c7ef835ed62e43f8b968f76d46215088'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR 
oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.162 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.164 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.164 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1258b844-3e3b-4e39-ac1a-5c62c948ab71', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.164194', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 
0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '82678c72-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': '848f55c1f79801e96169ea82f6dd78caecdb5af67e8ae3c593150ff74acaec06'}]}, 'timestamp': '2025-10-05 09:34:37.164701', '_unique_id': 'b830f1bf52f14786bbb853d196c92b4a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR 
oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.165 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.167 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.167 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4f0da8b8-fa64-4f0e-9f6e-319f80fa373b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.167151', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8267fea0-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': '69f1c27397b92244ee265f0c8a9d900f0fe4893728775bd4527abd615780dcd6'}]}, 'timestamp': '2025-10-05 09:34:37.167655', '_unique_id': '50d72a8edfcb4838a0101427f1055835'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.168 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:37.169 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.187 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.188 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a445ecb3-c8b7-42d4-af69-41ff28b6113b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:37.169810', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '826b213e-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.393837288, 'message_signature': 'f55b5b91b74eeedc630b527321886a9d5e2b31d16d3bc4233610d9d5cc4bef26'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:37.169810', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '826b328c-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.393837288, 'message_signature': 'c54ce139c868a93d1fc0bb1459b98d96e4a71623ff58bd2e486b7035d2945aae'}]}, 'timestamp': '2025-10-05 09:34:37.188623', '_unique_id': 'f49df4e7a33c43d68b0dab7cbe8f5319'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent 
call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.189 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]:
2025-10-05 09:34:37.190 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.191 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 274779002 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.191 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 31348051 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c9f6cfbd-baa1-4d48-ad80-bf93fa906429', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 274779002, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:37.191053', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '826ba460-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': '7d7b7a153b390c6c617edc110b4dab3d045b39c1b6c9c61a24be843f1da0d453'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 31348051, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:37.191053', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '826bb680-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': '10578e182cdbce697232501eab4fa1aa69da57d18722e52c146ec1c22cf6d043'}]}, 'timestamp': '2025-10-05 09:34:37.191971', '_unique_id': '5526dc94b8df490a84b98edc3ad6e367'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.192 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.194 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.194 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '326e10cb-6c13-4ab5-9cd2-b56e9b8f7b13', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.194161', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '826c1dc8-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': 'f41c53b88d503f347e0a1b4de6c3b4f2123ffe8227ae3d0e01c2d26353289286'}]}, 'timestamp': '2025-10-05 09:34:37.194657', '_unique_id': '0570493d10e547ea8f9dbc31e2497268'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 877, in _connection_factory
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.195 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.196 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.196 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.197 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.197 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.197 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.198 12 DEBUG
ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd375651e-10fa-4c52-8514-8d487733f75f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:37.197779', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '826cab26-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': 'd6ca750007896af6c004fa1144e70010a90ac074fc2b13ce9f18f80fd0d1371c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 
'counter_volume': 222, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:37.197779', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '826cbc9c-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': 'c6ad978dabc612b77f4cefd13be77a9a36c66fd1689d77493d3231aecb4c0c0f'}]}, 'timestamp': '2025-10-05 09:34:37.198663', '_unique_id': '148247eebd524963a77d7e23ab213798'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.199 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.200 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.200 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 73912320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.201 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '71212a63-2828-4b49-b07a-6a6cb4b4a702', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73912320, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:37.200898', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '826d24b6-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': 'f58d6a15de8c7d4ed271f1c2007588b615e4ac98db60e738e0e54b0580bb08a4'}, {'source': 'openstack', 'counter_name': 
'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:37.200898', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '826d3636-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': '53e6eaf077d73037bd6bd5df5cc92526e234255746f2578e492e20effda5db8b'}]}, 'timestamp': '2025-10-05 09:34:37.201774', '_unique_id': '1fa2db6838154178a1d9c88a848dc0a4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 
423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.202 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.203 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.204 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.204 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.204 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.204 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.204 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.205 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.205 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '96db2d87-c89c-43ea-bf63-40a39f82bd78', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.205175', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '826dccea-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': '28cca872af591882e62063e7f20cce66c8461dce8d28faf60d232919268ccf51'}]}, 'timestamp': '2025-10-05 09:34:37.205663', '_unique_id': '620ad0977b0f45eba381432a963c61be'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.206 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.207 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.207 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.208 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '77368217-18d5-4e78-8d95-1488f65699c5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:37.207781', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '826e3252-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.393837288, 'message_signature': '9fc5e9c582f3fc49b010ddbdf7e6e02652abd470c3e3064d36d4a4965544fccd'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:37.207781', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '826e438c-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.393837288, 'message_signature': '6ea24fd51e355ab3eaed86b9b28e35166021e0309fb8068e8115956d5afa910b'}]}, 'timestamp': '2025-10-05 09:34:37.208666', '_unique_id': '457c6a3a4d60463c83945a7cd459e87f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.209 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.210 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.210 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 53750000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '604b3eb7-42bc-4d87-a125-cad9a0178e6a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 53750000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:34:37.210821', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '826ea8c2-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.38152093, 'message_signature': 'ebd41a7c850eb917114e1b8f2632a4d42ab7d05e716bf4a2058d46e1feaf4ba3'}]}, 'timestamp': '2025-10-05 09:34:37.211270', '_unique_id': '35378c683ce84244b2ac89fa4166d837'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc))
from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.212 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.213 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.213 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '53e882cc-2516-4f32-96d3-690de63fc7f0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.213417', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 
'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '826f0e0c-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': '551c54a481abbb37bc32987808fc0dd070221af8efc4ddfbefcaaed7566e7048'}]}, 'timestamp': '2025-10-05 09:34:37.213878', '_unique_id': 'ae3e069d681a4c20b9f4dfa041905231'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging return 
fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 
05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.214 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.215 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.216 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0ee46704-35cc-4de5-88cb-cd1b3130759c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.215995', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '826f7266-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': 'a91981775edae61a9ad5daa4e967c6bf8be806a9fc906f49d399d5f32e4a1ffc'}]}, 'timestamp': '2025-10-05 09:34:37.216478', '_unique_id': 'ee1b4b1686cf46a7b1ef84fd59c6274b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.217 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:37.218 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.218 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1bbe5791-7249-4904-9e43-ddba1798f8e5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:37.218582', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '826fd77e-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.299493702, 'message_signature': 'f0396bc4753f1de9d450847df0dd7fa3dbe0b20153eab2e6c5a571eb3dd05595'}]}, 'timestamp': '2025-10-05 09:34:37.219057', '_unique_id': 'f47a0d53e5d1411aad24509ce53f9ab4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 
09:34:37.220 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 
09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.220 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.221 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.221 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.221 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9007936b-fe2f-40de-bf5a-16dd6f598b84', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:37.221283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8270422c-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.393837288, 'message_signature': '9404ae85c8229f83c64a12f45d52d08a90ea40b6f3852593d41bc0585403c149'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:37.221283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '82705294-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.393837288, 'message_signature': 'dae7a67ed46a2458774adf4507876c54d4e600ecec357804ac8a3af0c8fc466c'}]}, 'timestamp': '2025-10-05 09:34:37.222157', '_unique_id': 'f451365f10a34b7bb55e98f2dc0ee25d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 
2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.223 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.224 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.224 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.224 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.224 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in 
the context of pollsters Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.224 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 574 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.225 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7448c723-42ac-4630-ad57-71f15bce7655', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 574, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:37.224912', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8270cbac-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': 'ea1b0aa43b23de8155e1424055fffec780f7c704d20d7464d2a23af02411c6bb'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:37.224912', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8270d5ac-a1ce-11f0-b2dc-fa163ec6f33d', 'monotonic_time': 10356.316428882, 'message_signature': '69702445864ef6317fdc058e63a7b2d7cbcaf1fe31d01d7f67d74985837625cd'}]}, 'timestamp': '2025-10-05 09:34:37.225458', '_unique_id': 'd6e40879e523428698f31d80813caea6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 
09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 
201, in establish_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR 
oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 
09:34:37.226 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:37 localhost ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.226 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:37 localhost journal[207037]: End of file while reading data: Input/output error Oct 5 05:34:37 localhost 
ceilometer_agent_compute[245558]: 2025-10-05 09:34:37.233 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320 Oct 5 05:34:37 localhost journal[207037]: End of file while reading data: Input/output error Oct 5 05:34:37 localhost systemd[1]: libpod-dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.scope: Deactivated successfully. Oct 5 05:34:37 localhost systemd[1]: libpod-dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.scope: Consumed 1.324s CPU time. Oct 5 05:34:37 localhost podman[245723]: 2025-10-05 09:34:37.381234707 +0000 UTC m=+1.116738165 container died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute) Oct 5 05:34:37 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.timer: Deactivated successfully. Oct 5 05:34:37 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:34:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef-userdata-shm.mount: Deactivated successfully. Oct 5 05:34:37 localhost systemd[1]: var-lib-containers-storage-overlay-b1c2df51ffe94e92ff9566b75b5f0a189f69f72e5f9f1aed18e449bfb243900d-merged.mount: Deactivated successfully. 
Oct 5 05:34:37 localhost podman[245723]: 2025-10-05 09:34:37.468849143 +0000 UTC m=+1.204352611 container cleanup dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:34:37 localhost podman[245723]: ceilometer_agent_compute Oct 5 05:34:37 localhost podman[245759]: 2025-10-05 09:34:37.578158306 +0000 UTC m=+0.071283524 
container cleanup dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 05:34:37 localhost podman[245759]: ceilometer_agent_compute Oct 5 05:34:37 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully. Oct 5 05:34:37 localhost systemd[1]: Stopped ceilometer_agent_compute container. 
Oct 5 05:34:37 localhost systemd[1]: Starting ceilometer_agent_compute container... Oct 5 05:34:37 localhost systemd[1]: Started libcrun container. Oct 5 05:34:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c2df51ffe94e92ff9566b75b5f0a189f69f72e5f9f1aed18e449bfb243900d/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Oct 5 05:34:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c2df51ffe94e92ff9566b75b5f0a189f69f72e5f9f1aed18e449bfb243900d/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Oct 5 05:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:34:37 localhost podman[245773]: 2025-10-05 09:34:37.762307816 +0000 UTC m=+0.144514658 container init dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm) Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: + sudo -E kolla_set_configs Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: sudo: unable to send audit message: Operation not permitted Oct 5 05:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:34:37 localhost podman[245773]: 2025-10-05 09:34:37.810363142 +0000 UTC m=+0.192569944 container start dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:34:37 localhost podman[245773]: ceilometer_agent_compute Oct 5 05:34:37 localhost systemd[1]: Started ceilometer_agent_compute container. Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Validating config file Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Copying service configuration files Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: 
INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: INFO:__main__:Writing out command to execute Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: ++ cat /run_command Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: + ARGS= Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: + sudo kolla_copy_cacerts Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: sudo: unable to send audit message: Operation not permitted Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: + [[ ! -n '' ]] Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: + . 
kolla_extend_start Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: + umask 0022 Oct 5 05:34:37 localhost ceilometer_agent_compute[245788]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Oct 5 05:34:37 localhost podman[245797]: 2025-10-05 09:34:37.915271099 +0000 UTC m=+0.101548758 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:34:37 localhost podman[245797]: 2025-10-05 09:34:37.946310233 +0000 UTC m=+0.132587852 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:34:37 localhost podman[245797]: unhealthy Oct 5 05:34:37 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:34:37 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Failed with result 'exit-code'. Oct 5 05:34:38 localhost nova_compute[238314]: 2025-10-05 09:34:38.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:38 localhost nova_compute[238314]: 2025-10-05 09:34:38.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.618 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.619 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 
DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.620 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.621 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.622 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.622 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.622 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.622 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.622 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.622 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.622 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.622 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.622 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] 
rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.623 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG 
cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG 
cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.624 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] 
monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.626 2 DEBUG cotyledon.oslo_config_glue [-] 
monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.627 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 
DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.628 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.629 2 DEBUG cotyledon.oslo_config_glue [-] 
service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] 
vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.630 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.631 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.632 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.633 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.650 12 INFO ceilometer.polling.manager [-] Looking for 
dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.652 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.652 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.668 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Oct 5 05:34:38 localhost python3.9[245928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.800 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.800 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.800 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.800 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.800 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.800 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.800 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.800 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.800 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue 
[-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.801 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.802 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG 
cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.803 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.804 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG 
cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.805 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 
05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG 
cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.806 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.807 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.808 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = 
event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.809 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.810 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] 
service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.811 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.812 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.813 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.814 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.815 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.816 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.817 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.818 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.820 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Oct 5 05:34:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:38.825 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.225 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}e09f16705f8b77750fe3dc6bb014bc10acf122363d5a1aa5912ad759f74158a7" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Oct 5 05:34:39 localhost python3.9[246022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656878.1640368-1544-142354938090613/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.430 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 327 Content-Type: application/json Date: Sun, 05 Oct 2025 09:34:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-42a28b05-80e9-43c1-a85b-699e453afa5a x-openstack-request-id: req-42a28b05-80e9-43c1-a85b-699e453afa5a _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.431 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "76acf371-9e6c-4c5c-aec4-748e712efe27", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.431 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-42a28b05-80e9-43c1-a85b-699e453afa5a request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.433 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}e09f16705f8b77750fe3dc6bb014bc10acf122363d5a1aa5912ad759f74158a7" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.456 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 494 Content-Type: application/json Date: Sun, 05 Oct 2025 09:34:39 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-83a47835-6540-4468-8663-950677655e23 x-openstack-request-id: req-83a47835-6540-4468-8663-950677655e23 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.456 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "76acf371-9e6c-4c5c-aec4-748e712efe27", "name": "m1.small", "ram": 512, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 1, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.456 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27 used request id req-83a47835-6540-4468-8663-950677655e23 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.458 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.459 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.488 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 73912320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.489 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a4e908a8-3e02-4fe6-a7b1-a526a844b1af', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73912320, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:39.459926', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83ca4230-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': '8271b38a88716d02f7234de5ae3a1dc81d3198be4397d48dae7d124610d142e2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:39.459926', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83ca5f68-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': '83525314181135b6d602d98cc9e83e030aff015d5b0f8324bf2c6c59c628fed6'}]}, 'timestamp': '2025-10-05 09:34:39.490100', '_unique_id': 'accf6e4d48f24277abe974f54f0cfeef'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.499 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.503 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.521 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.522 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '51fd3710-5b09-4c97-a841-6b1e7cdd6db3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:39.503888', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83cf5130-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.728020532, 'message_signature': 'beaffded46568ea4ec1bba3277561ec94322e47bc913cf92c0fae4e22d4c5c3d'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id':
'8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:39.503888', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83cf6684-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.728020532, 'message_signature': '840f8b9ca5d9b495493b5f3d9f94867ddd25ee92a6d0fbf31159c897f729db28'}]}, 'timestamp': '2025-10-05 09:34:39.523001', '_unique_id': 'eb1961810cd342a6a6cf19427b827d8c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging 
self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.524 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.525 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.529 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2b20c302-a8d1-4ee0-990b-24973ca23df1 / tap4db5c636-30 inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.529 12 DEBUG ceilometer.compute.pollsters [-] 
2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '10420397-bf74-4f22-a17d-a6c10b7f042e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.525520', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83d081cc-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': '5b899efc1c0791ebeadde8c8bc47aefa0d83f4406d21023a204988de92a0ae71'}]}, 'timestamp': 
'2025-10-05 09:34:39.530346', '_unique_id': '3547afe4453b4f149c0f4f8d08db842f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in 
_establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR 
oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 
12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.531 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.532 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.532 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 574 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.533 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd06c3cac-7ba8-46b1-88d1-820a782462ec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 574, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:39.532735', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83d0f486-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': '9e2b349681fe7e85dfefbd4b861d54c23f63a2ea46628690890d2e07d2e8e447'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:39.532735', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83d106e2-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': 'a8e874f9dc7d74bdadcec87b6589b1bd86a82dff4176104a87e9aa152ed85998'}]}, 'timestamp': '2025-10-05 09:34:39.533656', '_unique_id': '396fa4795d69434795c65671871f79d0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.534 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.535 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.536 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.536 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '7cb323c5-e477-459d-bb1b-9d0445f6f1d9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:39.536010', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83d17474-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.728020532, 'message_signature': 'be681bea7d6469ec605573ca2459f140ff0fb99b80e6945d1e7c65e644c7711e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:39.536010', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83d1872a-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.728020532, 'message_signature': '4489c97e79f5a6d70b722a8ed4f46dd9192b3cc58b8b6d3af7dfe10100a3708d'}]}, 'timestamp': '2025-10-05 09:34:39.536936', '_unique_id': '15e924d267364649a9a68f0375f62319'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.537 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.539 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.539 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8ba70839-d026-430a-aedd-7a91ec4939cb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.539196', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83d1f200-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': 'f94728d93822753d336c374b5c130d78d8e710a4a97ec0b4313549590c2f86a4'}]}, 'timestamp': '2025-10-05 09:34:39.539703', '_unique_id': '96b48121608b45d6894f8f76ad0b1388'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.540 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.542 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.542 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.542 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.543 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.543 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b7542159-a78e-492c-a772-e20440b33f6e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.543194', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83d28ea4-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': '4c21fe96dc8e076c9bad0c8f472b463f8bec292de941979d1679058a7f6a1e5e'}]}, 'timestamp': '2025-10-05 09:34:39.543717', '_unique_id': 'f317dfcfdd29451d946fe3020b3898b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.544 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.545 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.545 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1213559769 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.546 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 162365672 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cd2b043b-bee8-4785-8e2c-fefcc49c34f8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1213559769, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:39.545947', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83d2f86c-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': '155f27728c9408cdb8556a567e906f1d82a0a0d9d18288a4e24d994a8aea8e5c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 162365672, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:39.545947', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83d30b4a-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': '63186f21ae68f76128b7d2c6d13f55c3c0e0736371757195c70c3c758fa5a9aa'}]}, 'timestamp': '2025-10-05 09:34:39.546875', '_unique_id': 'af552129e9d84ae4920ea7840a7c97b9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.547 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.548 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.549 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.549 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a302a742-2b68-44da-999d-6ffd009dfbc0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:39.549108', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83d373e6-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': '4b0f4a0f5790a98a608656fe144b72b75efe8e09e13d7e484bb1ded6c5afd0ab'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:39.549108', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83d38610-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': '1c9c17b60e306dfe9e3274e326e098ffecfa0d634a515b665dde5e81a48032e1'}]}, 'timestamp': '2025-10-05 09:34:39.550010', '_unique_id': '8352426aa6644ac99596137303f09903'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.550 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.552 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.552 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '16c17112-0ce9-444d-a620-97bef1b71d30', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.552237', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83d3ef88-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': '861c36a39e1b9f772e8b825a0289393cb26ef900ae7f8bcd19b00455896d6b28'}]}, 'timestamp': '2025-10-05 09:34:39.552739', '_unique_id': '27071c861dd44fc8aa01b7abad82f8d6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.553 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.554 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.554 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.555 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.555 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.555 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c2d41e3d-a6a3-4e80-b9b9-2dfebf16f71c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.555553', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83d47002-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': '149c90cab64c6310cd21fd3cb37df152847c1c3fb376d753a88326a498a7098e'}]}, 'timestamp': '2025-10-05 09:34:39.556029', '_unique_id': 'e5726645436f49a397d7daba648e3d4f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.556 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.558 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.558 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.558 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.558 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.558 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '36cc851c-609b-4701-8a09-1cc9d7ab0ccc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.558823', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83d4eea6-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': 'dbe6a5d76309e935d04a8bc441667f7e7fb2e528cdfdbf20337ac3f7f7da0705'}]}, 'timestamp': '2025-10-05 09:34:39.559180', '_unique_id': '8e83e9e7359644978574ee4210e94894'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.559 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.560 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.583 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 52.31640625 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '051b8580-996c-4e97-9a58-eb9463fc442d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.31640625, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:34:39.560539', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '83d8b37e-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.806521666, 'message_signature': 
'7015d6b978a390cbd6343126759f0d1266d8bde16a618b768c4bc955b3b35c4a'}]}, 'timestamp': '2025-10-05 09:34:39.584033', '_unique_id': 'e892ee5dbde2451fa8bd30f5ed18d976'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR 
oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.585 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.586 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.586 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.586 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.587 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.587 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 53770000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e37407b4-6916-4de6-8ea6-b597ea77e483', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 53770000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:34:39.587355', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '83d94b90-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.806521666, 'message_signature': 'db747f59673e507a39faa06814a675a3522dec56bc7e1dc9d0797a37d218ec15'}]}, 'timestamp': '2025-10-05 09:34:39.587850', '_unique_id': 'e05b1a05801742988a7ae031a2c22caa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.588 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.589 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:34:39.590 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6974f36c-cfee-4bab-a201-e2c52e4e45b3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.590067', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83d9b440-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': 
'b2d1d28fbf771bcf8ad3d94fbc407b5eaa759557733f404abbe9c00bb3d75ba8'}]}, 'timestamp': '2025-10-05 09:34:39.590595', '_unique_id': 'b01dc76a785143459d1e7bd186aa1322'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR 
oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.591 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.592 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.592 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.593 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e21a6cf7-99fc-4509-b362-3dc01e960b85', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:39.592744', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83da1c64-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.728020532, 'message_signature': '54386aaf602f1d217f3a5e1d2f958f3e57be259f4a4f4cb33559392663a314e9'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:39.592744', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83da2e20-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.728020532, 'message_signature': '21740775e403b1d72f45dc68a2b262e93bd15f06b9f80115217d7d202ad0538f'}]}, 'timestamp': '2025-10-05 09:34:39.593636', '_unique_id': 'b29e7fbd32c94b0c81f7260c46a8c3a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.594 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.595 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.595 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 274779002 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.596 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 31348051 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3505bb68-e5d8-4bbc-a86b-bf6c341bdf74', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 274779002, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:34:39.595851', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83da95c2-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': 'b388934688e1c80498c6cc617122d67cec438a6564805b23a2f10c73773e6a9d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 31348051, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:34:39.595851', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83daa788-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.684013043, 'message_signature': 'e5fda0c31208a5b97fa19977356e0a445759b56f9184566d7560a9f9eac97a63'}]}, 'timestamp': '2025-10-05 09:34:39.596741', '_unique_id': '47dcd3a4d13840b9b4cdeb9e31489c52'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.597 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.598 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.599 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8782 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '61799b40-ad69-48d6-92d9-6db9820b5c24', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8782, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.598988', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83db1060-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': 'f328f3b61671101582954b145bd3e42313a289c919bc2e6dbf765782993545a6'}]}, 'timestamp': '2025-10-05 09:34:39.599488', '_unique_id': '16c730862d034b44bd725b11c4a06f7b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.600 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.601 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.601 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '864b1fa3-4f73-4a0d-8cc7-a23234235697', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.601649', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83db7848-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': 'f297e1bb5ae5756c6a651dac46369ebbafb1c84228d4b765a70c389521d2257a'}]}, 'timestamp': '2025-10-05 09:34:39.602115', '_unique_id': '99401c069561434f94d4741308ea2be5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05
09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.603 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.604 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.604 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9bbef3bd-9594-4869-a868-2ad821b51d8f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:34:39.604465', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '83dbe67a-a1ce-11f0-9396-fa163ec6f33d', 'monotonic_time': 10358.749561614, 'message_signature': '398ec23739e8d05286b69a16c053b412f175e2eda9b6ccc82b1d21f24063f335'}]}, 'timestamp': '2025-10-05 09:34:39.604936', '_unique_id': '8bb41f4bc87e41f7977de5382b17547f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging Oct 5 05:34:39 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:34:39 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:34:39.605 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:26 localhost 
python3.9[257352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:27 localhost rsyslogd[759]: imjournal: 5241 messages lost due to rate-limiting (20000 allowed within 600 seconds) Oct 5 05:40:27 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:40:27 localhost python3.9[257409]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2dra13i9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:27 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:40:27 localhost python3.9[257519]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:27 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:40:27 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. 
Oct 5 05:40:28 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 5 05:40:28 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:40:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:40:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:40:28 localhost python3.9[257576]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:28 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:40:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15771 DF PROTO=TCP SPT=47312 DPT=9882 SEQ=2966866212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEC7B1D0000000001030307) Oct 5 05:40:28 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 5 05:40:29 localhost python3.9[257686]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:40:29 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:40:29 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:40:29 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:40:29 localhost python3[257797]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 5 05:40:30 localhost nova_compute[238314]: 2025-10-05 09:40:30.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:40:30 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:40:30 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:40:30 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 5 05:40:30 localhost python3.9[257907]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:31 localhost python3.9[257964]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-93e9de2b2ec23737f94de1f8bccf918a461ddca6ddb8186432fbf946e4c1bfc0-merged.mount: Deactivated successfully. Oct 5 05:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:40:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-93e9de2b2ec23737f94de1f8bccf918a461ddca6ddb8186432fbf946e4c1bfc0-merged.mount: Deactivated successfully. Oct 5 05:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully. 
Oct 5 05:40:31 localhost podman[248506]: time="2025-10-05T09:40:31Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\"" Oct 5 05:40:31 localhost podman[248506]: @ - - [05/Oct/2025:09:35:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1" Oct 5 05:40:32 localhost python3.9[258074]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:40:32 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully. Oct 5 05:40:32 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 5 05:40:32 localhost podman[258077]: 2025-10-05 09:40:32.425793095 +0000 UTC m=+0.080421370 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:40:32 localhost podman[258077]: 2025-10-05 09:40:32.432801147 +0000 UTC m=+0.087429422 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute) Oct 5 05:40:33 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. 
Oct 5 05:40:33 localhost systemd[1]: var-lib-containers-storage-overlay-6a2ba6fb6e64a4b58661c047727a0714e4aaa1299df5507383cf28a1ea2eccb4-merged.mount: Deactivated successfully. Oct 5 05:40:33 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:40:33 localhost python3.9[258150]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:33 localhost systemd[1]: var-lib-containers-storage-overlay-6a2ba6fb6e64a4b58661c047727a0714e4aaa1299df5507383cf28a1ea2eccb4-merged.mount: Deactivated successfully. Oct 5 05:40:33 localhost python3.9[258260]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:34 localhost python3.9[258317]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:35 localhost nova_compute[238314]: 2025-10-05 09:40:35.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:40:35 localhost nova_compute[238314]: 2025-10-05 09:40:35.299 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:40:35 localhost nova_compute[238314]: 2025-10-05 09:40:35.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:40:35 localhost nova_compute[238314]: 2025-10-05 09:40:35.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:40:35 localhost nova_compute[238314]: 2025-10-05 09:40:35.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:40:35 localhost nova_compute[238314]: 2025-10-05 09:40:35.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:40:35 localhost nova_compute[238314]: 2025-10-05 09:40:35.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:35 localhost nova_compute[238314]: 2025-10-05 09:40:35.377 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:40:35 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:40:35 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. 
Oct 5 05:40:35 localhost python3.9[258427]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:35 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:40:36 localhost python3.9[258484]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:40:36 localhost podman[258594]: 2025-10-05 09:40:36.961564454 +0000 UTC m=+0.077204551 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.) Oct 5 05:40:36 localhost podman[258594]: 2025-10-05 09:40:36.980746359 +0000 UTC m=+0.096386456 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git) Oct 5 05:40:37 localhost python3.9[258595]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:37 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 5 05:40:37 localhost python3.9[258704]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759657236.3505197-3539-228008659266022/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:37 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:40:38 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:40:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:40:38 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:40:38 localhost podman[258782]: 2025-10-05 09:40:38.208805159 +0000 UTC m=+0.092059867 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:40:38 localhost podman[258782]: 2025-10-05 09:40:38.219775209 +0000 UTC m=+0.103029957 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:40:38 localhost podman[258782]: unhealthy Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.373 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.399 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] 
Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.399 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.399 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.400 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.400 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:40:38 localhost python3.9[258838]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.822 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.830 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.831 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.836 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification 
to notifications. Payload={'message_id': '235b438c-46cd-4a4e-9f7a-bdc605682e5c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.831736', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '59fa63a8-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': '65ce2f8cc393c6b81455bf1459b393b10f338617acd01665cc7d6c2052ab77f4'}]}, 'timestamp': '2025-10-05 09:40:38.837223', '_unique_id': 'e2edcaaaa7404b96b277d73dac5ee194'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.838 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.840 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.863 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.863 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '543ffcda-16e9-4eab-a1de-df4a5242d89a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:40:38.840305', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '59fe783a-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.06422309, 'message_signature': '9d2a58d8cc6fa42398f79265981cfde11a10e0ec85451425babc6edf8e256ce7'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:40:38.840305', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59fe8e10-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.06422309, 'message_signature': '74dd4b79c5b7e28d584430410c97a841ca9e0015e3976579b5492de49424e9d1'}]}, 'timestamp': '2025-10-05 09:40:38.864590', '_unique_id': '7446d21f30244149b6c2706f39a59a41'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.867 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.882 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:40:38 localhost nova_compute[238314]: 2025-10-05 09:40:38.883 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.904 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.904 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ec9c99dc-684f-4b86-bf00-e2be6dbafce7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:40:38.867337', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a04b4de-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': '90f012371eb35a53a10a4e37c84c4d155b4517cec953119f76458364932c5723'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:40:38.867337', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a04caaa-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': '447ca589d1597a0b0f05883d3805513a53d4a71da9e7315d03f513d25babd608'}]}, 'timestamp': '2025-10-05 09:40:38.905326', '_unique_id': '3bb078b44fab4fb5b2329c725d06f25d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.906 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.907 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.908 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8782 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3c411a1f-b3b2-4a41-bca2-b2e281d55ccb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8782, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.908000', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '5a0545ca-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': 'e899b6b0ef5604a95adc6ceaec04289b04f508ab25c803be561382c8c37480c5'}]}, 'timestamp': '2025-10-05 09:40:38.908533', '_unique_id': '9b0b9062299e45a8aca3952bc17e35a4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.909 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.910 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.910 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1213559769 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.911 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 162365672 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.912 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c66d0e7b-5968-4097-8a04-2314e7e27e8d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1213559769, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:40:38.910866', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a05b53c-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': '331b2657c3759448c85b4dc3ea5bccf0994985f1eaf677c8ed790cb831ccde81'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 162365672, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:40:38.910866', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a05c7ca-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': '28da587d110ee3a40e1f07bdff63c7c10f43926aeca2c5debdff0a97c1c68387'}]}, 'timestamp': '2025-10-05 09:40:38.911841', '_unique_id': 'a13b9b8df40940a48f0e049cacd468c7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.914 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.914 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.915 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fb1ca61e-d141-4e4d-b0fa-5e98539fc5ea', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.914205', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '5a0638d6-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': '08b572247a414f611c372945721394207dc8ac584b91d3314b2f36e4c4d1c6dc'}]}, 'timestamp': '2025-10-05 09:40:38.914720', '_unique_id': '96ac8734ce6f4bacb7c702f81c37e3d9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.916 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.917 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6a09801d-0205-4847-90d2-c1520c8f01c4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.916968', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '5a06a352-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': '2d0610cfdabb157a22d9f039001663092200cf12e0514d323d19a90e64140972'}]}, 'timestamp': '2025-10-05 09:40:38.917473', '_unique_id': '6c1548d0e0bb4b51b7be6159d10f2ace'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.919 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.919 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.919 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7b6b5a1e-be0d-4d15-bd9a-4204804d91b5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:40:38.919441', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a070068-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.06422309, 'message_signature': 'fd08168c8c65806bed682bfa9e0f9b08033c75426e9b400bbf6b7f24e344782c'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:40:38.919441', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a070aea-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.06422309, 'message_signature': 'df64a8f32613cb96325b62899c013f35ac18a5b9dcce65e5b82c059e418a27d8'}]}, 'timestamp': '2025-10-05 09:40:38.919987', '_unique_id': 'bbe083c14cd94949820f6c4b4889bf39'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.921 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.921 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.921 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.921 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ac40c3e2-81cf-4c00-a0d2-138a1387de9d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:40:38.921517', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a07513a-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': 'bd61e8f552eba65e1bc6c751278f9ab2d269da139740592ba40d4ff43a5d0948'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:40:38.921517', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a075bb2-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': '19ac2311ad1f533e5108083e05aa315a0627ed8fcf923af824d6c386b569f365'}]}, 'timestamp': '2025-10-05 09:40:38.922056', '_unique_id': '950b4bd20cf24eef94965f971c60c633'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.923 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 274779002 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 31348051 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging 
[-] Could not send notification to notifications. Payload={'message_id': '4619ab8b-884d-4e9c-a5e2-0c257e98607b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 274779002, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:40:38.923461', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a079d52-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': 'f83095b46cbdf798cc0c1765879ca1e713d0f6c5bcdc1b0a6b924facc2f74fa4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 31348051, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:40:38.923461', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a07a748-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': '4cc32b210b61c606e94e228822152dae299cc7295935b977ba4419b48003aed4'}]}, 'timestamp': '2025-10-05 09:40:38.923988', '_unique_id': 'c3304f53174f4327be69c39a4fa14522'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 
05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.924 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.925 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.925 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b2b401fb-51ab-4115-9c14-c8cd2f0f0a0b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.925344', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '5a07e7ee-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': '3580051999194d75ffc92e299919a14f97615d05ea93dc2424c965163a2b1890'}]}, 'timestamp': '2025-10-05 09:40:38.925662', '_unique_id': 'dcd8ea205ab546bb92f5fbd16a935a15'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.926 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.926 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.927 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.927 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '17505eda-51fe-4cda-b67b-f7fdf0423e8f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.927111', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '5a082bbe-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': 'c0d9a2edad04edcc48a379b0878b7106716b0d0149bc6c670094a06cd5d7f97b'}]}, 'timestamp': '2025-10-05 09:40:38.927418', '_unique_id': 'a7529e357ccb45b5b04e288c0f834174'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in 
_connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.928 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.929 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.943 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 52.31640625 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '33daa20a-de03-423a-8c41-c74787f2c499', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.31640625, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:40:38.929227', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '5a0ab3c0-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.167461083, 'message_signature': '7c72d9b3ba6ae46919948fd0a42252137c31e97151205868d32b9c1c7220fbf3'}]}, 'timestamp': '2025-10-05 09:40:38.943991', '_unique_id': '0226772e2ab344d2ba6573eb853186b1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.945 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:40:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '95090c56-fca5-456a-964f-f5e7fce46dba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:40:38.945455', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 
'vda'}, 'message_id': '5a0af88a-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.06422309, 'message_signature': '0f92844737f60add532a380085119ee6fd81363d74769fa7fb4b56612539c6af'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:40:38.945455', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a0b0294-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.06422309, 'message_signature': 'fe5e1f7f3280156d3c0961a1280d7383ba5edd4ea1d0a231e9660c39e97d5c4a'}]}, 'timestamp': '2025-10-05 09:40:38.945987', '_unique_id': 'd17634bf3fa746e9a44135358aefea71'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, 
in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR 
oslo_messaging.notify.messaging conn.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.946 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.947 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.947 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 73912320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.947 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e9be363e-ec10-47af-93ef-d849788d9a73', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73912320, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:40:38.947362', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a0b4452-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': '073b6ecd78ec4c1f076052f276a24375e81c364f8bf80b01f61014951a94be5f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:40:38.947362', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a0b4e98-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': '74c4b1dd0d9ec17d2fd63656bf4726ffeed371f2d4075b98263b5bf1d08e3a68'}]}, 'timestamp': '2025-10-05 09:40:38.947934', '_unique_id': '1529d41cb6b74651bc60055a07dac757'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.948 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.949 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.949 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ae24ebc6-87c3-4eb3-89d7-4f4a641d1a2e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.949280', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '5a0b8eb2-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': 'e9f4a1be271c3eea5efd5384bbd5020f0d9cccd0447922dd0b26135e714db549'}]}, 'timestamp': '2025-10-05 09:40:38.949593', '_unique_id': '039276e715de42d2aad000c9b3932220'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.950 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 56810000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a948dc82-38bb-40a0-9a41-f469913796aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 56810000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:40:38.950975', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '5a0bd034-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.167461083, 'message_signature': '01cde07c3feabb676ef3856a7013c1889758aa5c4e249777106237f721272c8c'}]}, 'timestamp': '2025-10-05 09:40:38.951260', '_unique_id': '835bf64db2664e58923af0dfeb648985'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:40:38 localhost
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.952 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0ce76e7c-ec08-4496-8186-bc1799712540', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.952624', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '5a0c108a-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': '56fdae662ca1ffbfa335c7a8bc5205192de76c0d2e9f125c4126c4677222e732'}]}, 'timestamp': '2025-10-05 09:40:38.952918', '_unique_id': 'b23f644252c44b3c80d9dafaa8da25b1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:40:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:40:38.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.954 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f9751e18-9f5e-453f-ac81-405bb0c19698', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.954543', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '5a0c5b6c-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': 'ba46e0c7fd52f870979dd7f49a49a12bdeba7ef1b5637c8d93937cfbb58de5da'}]}, 'timestamp': '2025-10-05 09:40:38.954855', '_unique_id': 'eb266070d4214f699645054c8b4728e2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in 
_connect Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:40:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.955 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.956 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.956 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '458919c2-218c-44b1-9d9b-7fa8e8474b06', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:40:38.956279', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '5a0c9ff0-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.055647546, 'message_signature': 'abc39b491b656b38deec04d3440125708faaabb95865963f686252facb2427ff'}]}, 'timestamp': '2025-10-05 09:40:38.956590', '_unique_id': '42072f771fd3459191d0e4ccb016be68'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.957 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 574 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.958 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5799a1c4-f5eb-4f27-9f13-463dc5c094a1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 574, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:40:38.957901', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a0cde84-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': '35fda4d2617bd06810bbe25dde95d3af44c2a22dd60563cfffeab3abb3aa80aa'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:40:38.957901', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a0ce8c0-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10718.091301171, 'message_signature': 'b77e164b83254817ec93141173f368f8691a0029ce4fb3e3d8b051852486fe2f'}]}, 'timestamp': '2025-10-05 09:40:38.958459', '_unique_id': '71c4320290144b3fa4da2b28dabb70d4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:40:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:40:38.959 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:40:39 localhost python3.9[258970]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.076 2 WARNING nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.077 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12282MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.077 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.077 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.183 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.183 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.183 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.238 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:40:39 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:40:39 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Failed with result 'exit-code'. Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.700 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.707 2 DEBUG nova.compute.provider_tree [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.728 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.730 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain 
_update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:40:39 localhost nova_compute[238314]: 2025-10-05 09:40:39.731 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 5 05:40:40 localhost python3.9[259106]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:40 localhost nova_compute[238314]: 2025-10-05 09:40:40.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:40:40 localhost nova_compute[238314]: 2025-10-05 09:40:40.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:40:40 localhost nova_compute[238314]: 2025-10-05 09:40:40.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:40:40 localhost nova_compute[238314]: 2025-10-05 09:40:40.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:40:40 localhost nova_compute[238314]: 2025-10-05 09:40:40.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:40:40 localhost nova_compute[238314]: 2025-10-05 09:40:40.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:40:40 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:40:40 localhost systemd[1]: tmp-crun.Ys85Oc.mount: Deactivated successfully. Oct 5 05:40:40 localhost podman[259197]: 2025-10-05 09:40:40.719647278 +0000 UTC m=+0.118079330 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:40:40 localhost podman[259197]: 2025-10-05 09:40:40.731744799 +0000 UTC m=+0.130176851 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, 
name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:40:40 localhost python3.9[259228]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:40:41 localhost python3.9[259349]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:40:41 localhost nova_compute[238314]: 2025-10-05 09:40:41.732 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:41 localhost nova_compute[238314]: 2025-10-05 09:40:41.733 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:40:41 localhost nova_compute[238314]: 2025-10-05 09:40:41.733 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:40:42 localhost python3.9[259461]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:40:42 localhost nova_compute[238314]: 2025-10-05 09:40:42.417 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:40:42 localhost nova_compute[238314]: 2025-10-05 09:40:42.417 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:40:42 localhost nova_compute[238314]: 2025-10-05 09:40:42.417 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:40:42 localhost nova_compute[238314]: 2025-10-05 09:40:42.418 2 DEBUG nova.objects.instance [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:40:42 localhost nova_compute[238314]: 2025-10-05 09:40:42.757 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:40:42 localhost nova_compute[238314]: 2025-10-05 09:40:42.770 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Releasing 
lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:40:42 localhost nova_compute[238314]: 2025-10-05 09:40:42.770 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:40:42 localhost nova_compute[238314]: 2025-10-05 09:40:42.770 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:42 localhost nova_compute[238314]: 2025-10-05 09:40:42.771 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:42 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:40:42 localhost systemd[1]: var-lib-containers-storage-overlay-93e9de2b2ec23737f94de1f8bccf918a461ddca6ddb8186432fbf946e4c1bfc0-merged.mount: Deactivated successfully. 
Oct 5 05:40:42 localhost python3.9[259574]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:43 localhost systemd[1]: var-lib-containers-storage-overlay-93e9de2b2ec23737f94de1f8bccf918a461ddca6ddb8186432fbf946e4c1bfc0-merged.mount: Deactivated successfully. Oct 5 05:40:43 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:40:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12681 DF PROTO=TCP SPT=57414 DPT=9102 SEQ=394370262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEECB4FE0000000001030307) Oct 5 05:40:43 localhost podman[248506]: @ - - [05/Oct/2025:09:35:10 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 132437 "" "Go-http-client/1.1" Oct 5 05:40:43 localhost podman_exporter[248766]: ts=2025-10-05T09:40:43.406Z caller=exporter.go:96 level=info msg="Listening on" address=:9882 Oct 5 05:40:43 localhost podman_exporter[248766]: ts=2025-10-05T09:40:43.406Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882 Oct 5 05:40:43 localhost podman_exporter[248766]: ts=2025-10-05T09:40:43.406Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882 Oct 5 05:40:43 localhost systemd[1]: session-59.scope: Deactivated successfully. Oct 5 05:40:43 localhost systemd[1]: session-59.scope: Consumed 13.283s CPU time. 
Oct 5 05:40:43 localhost systemd-logind[760]: Session 59 logged out. Waiting for processes to exit. Oct 5 05:40:43 localhost systemd-logind[760]: Removed session 59. Oct 5 05:40:43 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully. Oct 5 05:40:43 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully. Oct 5 05:40:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12682 DF PROTO=TCP SPT=57414 DPT=9102 SEQ=394370262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEECB91D0000000001030307) Oct 5 05:40:45 localhost nova_compute[238314]: 2025-10-05 09:40:45.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:40:45 localhost nova_compute[238314]: 2025-10-05 09:40:45.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:40:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12683 DF PROTO=TCP SPT=57414 DPT=9102 SEQ=394370262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEECC11D0000000001030307) Oct 5 05:40:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:40:47 localhost podman[259592]: 2025-10-05 09:40:47.685323552 +0000 UTC m=+0.085589390 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible) Oct 5 05:40:47 localhost podman[259592]: 2025-10-05 09:40:47.700861578 +0000 UTC m=+0.101127416 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:40:47 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:40:49 localhost sshd[259612]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:40:49 localhost systemd-logind[760]: New session 60 of user zuul. 
Oct 5 05:40:49 localhost systemd[1]: Started Session 60 of User zuul. Oct 5 05:40:50 localhost nova_compute[238314]: 2025-10-05 09:40:50.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:40:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12684 DF PROTO=TCP SPT=57414 DPT=9102 SEQ=394370262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEECD0DD0000000001030307) Oct 5 05:40:50 localhost python3.9[259725]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:51 localhost python3.9[259835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:51 localhost podman[248506]: time="2025-10-05T09:40:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:40:51 localhost podman[248506]: @ - - [05/Oct/2025:09:40:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 134058 "" "Go-http-client/1.1" Oct 5 05:40:51 localhost podman[248506]: @ - - [05/Oct/2025:09:40:51 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16462 "" "Go-http-client/1.1" Oct 5 05:40:51 localhost python3.9[259946]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:52 localhost openstack_network_exporter[250601]: ERROR 09:40:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:40:52 localhost openstack_network_exporter[250601]: ERROR 09:40:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:40:52 localhost openstack_network_exporter[250601]: ERROR 09:40:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:40:52 localhost openstack_network_exporter[250601]: ERROR 09:40:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:40:52 localhost openstack_network_exporter[250601]: Oct 5 05:40:52 localhost openstack_network_exporter[250601]: ERROR 09:40:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:40:52 localhost openstack_network_exporter[250601]: Oct 5 05:40:52 localhost python3.9[260059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:53 localhost python3.9[260145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 
setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657252.1081705-104-50059897715436/.source.yaml follow=False _original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:53 localhost python3.9[260253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:54 localhost python3.9[260339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657253.529124-149-248322761064236/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:40:54 localhost podman[260371]: 2025-10-05 09:40:54.685566412 +0000 UTC m=+0.091617927 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:40:54 localhost podman[260371]: 2025-10-05 09:40:54.727789806 +0000 UTC m=+0.133841251 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:40:54 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:40:55 localhost python3.9[260466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:55 localhost nova_compute[238314]: 2025-10-05 09:40:55.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:40:55 localhost nova_compute[238314]: 2025-10-05 09:40:55.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:40:55 localhost nova_compute[238314]: 2025-10-05 09:40:55.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:40:55 localhost nova_compute[238314]: 2025-10-05 09:40:55.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:40:55 localhost nova_compute[238314]: 2025-10-05 09:40:55.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:40:55 localhost nova_compute[238314]: 2025-10-05 09:40:55.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:40:55 localhost python3.9[260552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657254.5956612-149-169387467265706/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:40:56 localhost systemd[1]: tmp-crun.0SrYSQ.mount: Deactivated successfully. Oct 5 05:40:56 localhost podman[260650]: 2025-10-05 09:40:56.699967195 +0000 UTC m=+0.105473695 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:40:56 localhost podman[260650]: 2025-10-05 09:40:56.708870738 +0000 UTC m=+0.114377228 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001) Oct 5 05:40:56 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:40:56 localhost python3.9[260666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:57 localhost python3.9[260764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657255.7016592-149-133873237430240/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=54a34d8ddf3f0cf57fa3b1fdeab4efe454f3946f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:40:58 localhost podman[260836]: 2025-10-05 09:40:58.668170954 +0000 UTC m=+0.073223283 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:40:58 localhost podman[260836]: 2025-10-05 09:40:58.712838375 +0000 UTC m=+0.117890694 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:40:58 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:40:58 localhost python3.9[260895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:59 localhost python3.9[260981]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657258.46277-323-183073305910214/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=c1fe3b6875a03fe9c93d1be48aa23d16c4ec18ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:00 localhost python3.9[261089]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:41:00 localhost nova_compute[238314]: 2025-10-05 09:41:00.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:00 localhost nova_compute[238314]: 2025-10-05 09:41:00.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:00 localhost nova_compute[238314]: 2025-10-05 09:41:00.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:41:00 localhost nova_compute[238314]: 2025-10-05 09:41:00.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:00 localhost nova_compute[238314]: 2025-10-05 09:41:00.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:00 localhost nova_compute[238314]: 2025-10-05 09:41:00.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:01 localhost python3.9[261201]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:02 localhost python3.9[261311]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:03 localhost python3.9[261368]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:41:03 localhost systemd[1]: tmp-crun.sSPb7G.mount: Deactivated successfully. 
Oct 5 05:41:03 localhost podman[261479]: 2025-10-05 09:41:03.60064404 +0000 UTC m=+0.091697088 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 05:41:03 localhost podman[261479]: 2025-10-05 09:41:03.614831809 +0000 UTC m=+0.105884847 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, tcib_managed=true) Oct 5 05:41:03 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:41:03 localhost python3.9[261478]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:04 localhost python3.9[261554]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:04 localhost python3.9[261664]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:05 localhost nova_compute[238314]: 2025-10-05 09:41:05.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:05 localhost nova_compute[238314]: 2025-10-05 09:41:05.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:05 localhost nova_compute[238314]: 2025-10-05 09:41:05.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:41:05 localhost nova_compute[238314]: 2025-10-05 09:41:05.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:05 localhost nova_compute[238314]: 2025-10-05 09:41:05.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:05 localhost nova_compute[238314]: 2025-10-05 09:41:05.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:05 localhost python3.9[261774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:06 localhost python3.9[261867]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:06 localhost podman[261994]: 2025-10-05 09:41:06.371310143 +0000 UTC m=+0.061982056 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, distribution-scope=public) Oct 5 05:41:06 localhost podman[261994]: 2025-10-05 09:41:06.500823485 +0000 UTC m=+0.191495378 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, release=553, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image.) Oct 5 05:41:06 localhost python3.9[262081]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:07 localhost python3.9[262208]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:41:08 localhost podman[262314]: 2025-10-05 09:41:08.680790895 +0000 UTC m=+0.081288874 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Oct 5 05:41:08 localhost podman[262314]: 2025-10-05 09:41:08.698738405 +0000 UTC m=+0.099236384 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container) Oct 5 05:41:08 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:41:09 localhost python3.9[262387]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:41:09 localhost systemd[1]: Reloading. Oct 5 05:41:09 localhost systemd-sysv-generator[262412]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:41:09 localhost systemd-rc-local-generator[262409]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:41:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:41:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:41:09 localhost podman[262425]: 2025-10-05 09:41:09.758463294 +0000 UTC m=+0.092920343 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:41:09 localhost podman[262425]: 2025-10-05 09:41:09.775490379 +0000 UTC m=+0.109947438 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 05:41:09 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 05:41:10 localhost python3.9[262558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:41:10 localhost nova_compute[238314]: 2025-10-05 09:41:10.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 05:41:10 localhost nova_compute[238314]: 2025-10-05 09:41:10.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:41:10 localhost nova_compute[238314]: 2025-10-05 09:41:10.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5033 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 05:41:10 localhost nova_compute[238314]: 2025-10-05 09:41:10.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:41:10 localhost nova_compute[238314]: 2025-10-05 09:41:10.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:41:10 localhost nova_compute[238314]: 2025-10-05 09:41:10.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:41:11 localhost python3.9[262615]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:41:12 localhost python3.9[262725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:41:12 localhost python3.9[262782]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:41:13 localhost python3.9[262892]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:41:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50641 DF PROTO=TCP SPT=49616 DPT=9102 SEQ=1429883068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEED2A2E0000000001030307)
Oct 5 05:41:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 05:41:13 localhost systemd[1]: Reloading.
Oct 5 05:41:13 localhost podman[262894]: 2025-10-05 09:41:13.499498907 +0000 UTC m=+0.093935267 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 5 05:41:13 localhost systemd-rc-local-generator[262933]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:41:13 localhost systemd-sysv-generator[262936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:41:13 localhost podman[262894]: 2025-10-05 09:41:13.545710744 +0000 UTC m=+0.140147064 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 05:41:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:41:13 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 05:41:13 localhost systemd[1]: Starting Create netns directory...
Oct 5 05:41:13 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 5 05:41:13 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 5 05:41:13 localhost systemd[1]: Finished Create netns directory.
Oct 5 05:41:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50642 DF PROTO=TCP SPT=49616 DPT=9102 SEQ=1429883068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEED2E1E0000000001030307)
Oct 5 05:41:15 localhost nova_compute[238314]: 2025-10-05 09:41:15.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:41:15 localhost python3.9[263067]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:41:16 localhost python3.9[263177]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:41:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50643 DF PROTO=TCP SPT=49616 DPT=9102 SEQ=1429883068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEED361D0000000001030307)
Oct 5 05:41:16 localhost python3.9[263265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759657275.838483-734-266100995478512/.source.json _original_basename=.l3fhrwgx follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:41:17 localhost python3.9[263375]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:41:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:41:18 localhost systemd[1]: tmp-crun.lj4DpR.mount: Deactivated successfully.
Oct 5 05:41:18 localhost podman[263486]: 2025-10-05 09:41:18.195626879 +0000 UTC m=+0.104196994 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:41:18 localhost podman[263486]: 2025-10-05 09:41:18.211952652 +0000 UTC m=+0.120522787 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 5 05:41:18 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 05:41:20 localhost python3.9[263702]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False
Oct 5 05:41:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:41:20.437 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:41:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:41:20.438 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:41:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:41:20.439 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:41:20 localhost nova_compute[238314]: 2025-10-05 09:41:20.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:41:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50644 DF PROTO=TCP SPT=49616 DPT=9102 SEQ=1429883068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEED45DD0000000001030307)
Oct 5 05:41:21 localhost python3.9[263812]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 5 05:41:21 localhost podman[248506]: time="2025-10-05T09:41:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:41:21 localhost podman[248506]: @ - - [05/Oct/2025:09:41:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 134056 "" "Go-http-client/1.1"
Oct 5 05:41:21 localhost podman[248506]: @ - - [05/Oct/2025:09:41:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16463 "" "Go-http-client/1.1"
Oct 5 05:41:22 localhost openstack_network_exporter[250601]: ERROR 09:41:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:41:22 localhost openstack_network_exporter[250601]: ERROR 09:41:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:41:22 localhost openstack_network_exporter[250601]: ERROR 09:41:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:41:22 localhost openstack_network_exporter[250601]: ERROR 09:41:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:41:22 localhost openstack_network_exporter[250601]:
Oct 5 05:41:22 localhost openstack_network_exporter[250601]: ERROR 09:41:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:41:22 localhost openstack_network_exporter[250601]:
Oct 5 05:41:23 localhost python3.9[263922]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 5 05:41:25 localhost nova_compute[238314]: 2025-10-05 09:41:25.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 05:41:25 localhost nova_compute[238314]: 2025-10-05 09:41:25.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 05:41:25 localhost nova_compute[238314]: 2025-10-05 09:41:25.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 05:41:25 localhost nova_compute[238314]: 2025-10-05 09:41:25.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:41:25 localhost nova_compute[238314]: 2025-10-05 09:41:25.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:41:25 localhost nova_compute[238314]: 2025-10-05 09:41:25.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:41:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:41:25 localhost podman[263967]: 2025-10-05 09:41:25.687061209 +0000 UTC m=+0.087531943 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0)
Oct 5 05:41:25 localhost podman[263967]: 2025-10-05 09:41:25.697735687 +0000 UTC m=+0.098206421 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 5 05:41:25 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:41:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:41:26 localhost podman[264080]: 2025-10-05 09:41:26.951952087 +0000 UTC m=+0.088073258 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 5 05:41:26 localhost podman[264080]: 2025-10-05 09:41:26.985932699 +0000 UTC m=+0.122053830 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 5 05:41:26 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:41:27 localhost python3[264079]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 5 05:41:27 localhost podman[264136]:
Oct 5 05:41:27 localhost podman[264136]: 2025-10-05 09:41:27.397697696 +0000 UTC m=+0.084555386 container create 252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=neutron_sriov_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 5 05:41:27 localhost podman[264136]: 2025-10-05 09:41:27.351323375 +0000 UTC m=+0.038181115 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Oct 5 05:41:27 localhost python3[264079]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Oct 5 05:41:28 localhost python3.9[264285]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:41:28 localhost systemd[1]: tmp-crun.JSmiWy.mount: Deactivated successfully.
Oct 5 05:41:28 localhost podman[264398]: 2025-10-05 09:41:28.890931248 +0000 UTC m=+0.094499043 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 5 05:41:28 localhost podman[264398]: 2025-10-05 09:41:28.930926705 +0000 UTC m=+0.134494490 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 5 05:41:28 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:41:28 localhost python3.9[264397]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:29 localhost python3.9[264475]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:41:30 localhost python3.9[264584]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657289.4998627-998-36353719965449/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:30 localhost sshd[264585]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:41:30 localhost nova_compute[238314]: 2025-10-05 09:41:30.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:30 localhost nova_compute[238314]: 2025-10-05 09:41:30.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:30 localhost nova_compute[238314]: 2025-10-05 09:41:30.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:41:30 localhost nova_compute[238314]: 2025-10-05 09:41:30.519 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:30 localhost nova_compute[238314]: 2025-10-05 09:41:30.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:30 localhost nova_compute[238314]: 2025-10-05 09:41:30.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:30 localhost python3.9[264641]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:41:30 localhost systemd[1]: Reloading. Oct 5 05:41:31 localhost systemd-rc-local-generator[264663]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:41:31 localhost systemd-sysv-generator[264669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:41:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:41:31 localhost python3.9[264732]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:41:31 localhost systemd[1]: Reloading. Oct 5 05:41:32 localhost systemd-rc-local-generator[264758]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:41:32 localhost systemd-sysv-generator[264763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:41:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:41:32 localhost systemd[1]: Starting neutron_sriov_agent container... Oct 5 05:41:32 localhost systemd[1]: tmp-crun.dmaYU8.mount: Deactivated successfully. Oct 5 05:41:32 localhost systemd[1]: Started libcrun container. Oct 5 05:41:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f330a2f70ed8d72f7a2619c7361b83ed9124fde23631570886b88c80fd3a80ae/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 5 05:41:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f330a2f70ed8d72f7a2619c7361b83ed9124fde23631570886b88c80fd3a80ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:41:32 localhost podman[264773]: 2025-10-05 09:41:32.45730869 +0000 UTC m=+0.114657786 container init 252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:41:32 localhost podman[264773]: 2025-10-05 09:41:32.470222554 +0000 UTC m=+0.127571640 container start 252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent) Oct 5 
05:41:32 localhost podman[264773]: neutron_sriov_agent Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: + sudo -E kolla_set_configs Oct 5 05:41:32 localhost systemd[1]: Started neutron_sriov_agent container. Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Validating config file Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Copying service configuration files Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Writing out command to execute Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 5 05:41:32 localhost 
neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.pid.haproxy Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.conf Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: ++ cat /run_command Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: + CMD=/usr/bin/neutron-sriov-nic-agent Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: + ARGS= Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: + sudo kolla_copy_cacerts Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: + [[ ! -n '' ]] Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: + . 
kolla_extend_start Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: Running command: '/usr/bin/neutron-sriov-nic-agent' Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: + umask 0022 Oct 5 05:41:32 localhost neutron_sriov_agent[264787]: + exec /usr/bin/neutron-sriov-nic-agent Oct 5 05:41:33 localhost python3.9[264910]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:41:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:41:34 localhost systemd[1]: Stopping neutron_sriov_agent container... Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.320 2 INFO neutron.common.config [-] Logging enabled!#033[00m Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.320 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43#033[00m Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.321 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.321 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.321 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.321 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 
1, 'step_size': 1, 'reserved': 0}#033[00m Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.321 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005471150.localdomain'}#033[00m Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.322 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-b327c613-fc18-489a-b461-6abe68361398 - - - - - -] RPC agent_id: nic-switch-agent.np0005471150.localdomain#033[00m Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.326 2 INFO neutron.agent.agent_extensions_manager [None req-b327c613-fc18-489a-b461-6abe68361398 - - - - - -] Loaded agent extensions: ['qos']#033[00m Oct 5 05:41:34 localhost neutron_sriov_agent[264787]: 2025-10-05 09:41:34.326 2 INFO neutron.agent.agent_extensions_manager [None req-b327c613-fc18-489a-b461-6abe68361398 - - - - - -] Initializing agent extension 'qos'#033[00m Oct 5 05:41:34 localhost podman[264915]: 2025-10-05 09:41:34.405014105 +0000 UTC m=+0.091414765 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 05:41:34 localhost systemd[1]: libpod-252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234.scope: Deactivated successfully. Oct 5 05:41:34 localhost systemd[1]: libpod-252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234.scope: Consumed 1.827s CPU time. 
Oct 5 05:41:34 localhost podman[264916]: 2025-10-05 09:41:34.457047889 +0000 UTC m=+0.141935505 container died 252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_id=neutron_sriov_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=neutron_sriov_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.vendor=CentOS) Oct 5 05:41:34 localhost podman[264915]: 2025-10-05 09:41:34.526475807 +0000 UTC m=+0.212876457 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:41:34 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:41:34 localhost podman[264916]: 2025-10-05 09:41:34.568599025 +0000 UTC m=+0.253486571 container cleanup 252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, tcib_managed=true) Oct 5 05:41:34 localhost podman[264916]: neutron_sriov_agent Oct 5 05:41:34 localhost podman[264944]: 2025-10-05 09:41:34.570954183 +0000 UTC m=+0.106817530 container cleanup 252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:41:34 localhost podman[264958]: 2025-10-05 09:41:34.664028075 +0000 UTC m=+0.061750957 container cleanup 252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': 
['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:41:34 localhost podman[264958]: neutron_sriov_agent Oct 5 05:41:34 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. Oct 5 05:41:34 localhost systemd[1]: Stopped neutron_sriov_agent container. Oct 5 05:41:34 localhost systemd[1]: Starting neutron_sriov_agent container... Oct 5 05:41:34 localhost systemd[1]: Started libcrun container. Oct 5 05:41:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f330a2f70ed8d72f7a2619c7361b83ed9124fde23631570886b88c80fd3a80ae/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 5 05:41:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f330a2f70ed8d72f7a2619c7361b83ed9124fde23631570886b88c80fd3a80ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:41:34 localhost podman[264970]: 2025-10-05 09:41:34.796717032 +0000 UTC m=+0.102661720 container init 252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:41:34 localhost podman[264970]: 2025-10-05 09:41:34.807342699 +0000 UTC m=+0.113287387 container start 252be6530ca51b6d55fec700b7dddb9f1febe12189710514eedf621934083234 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_managed=true, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '94d31085fa55e773dfb19d87b8e8327a47af8ab15d125b61e0a00d3277c80dec'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent) Oct 5 05:41:34 localhost podman[264970]: neutron_sriov_agent Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: + sudo -E kolla_set_configs Oct 5 05:41:34 localhost systemd[1]: Started neutron_sriov_agent container. Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Validating config file Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Copying service configuration files Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Writing out command to execute Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: 
INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/456176946c9b2bc12efd840abf43863005adc00f003c5dd0716ca424d2bec219 Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.pid.haproxy Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.conf Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: ++ cat /run_command Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: + CMD=/usr/bin/neutron-sriov-nic-agent Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: + ARGS= Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: + sudo kolla_copy_cacerts Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: + [[ ! -n '' ]] Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: + . 
kolla_extend_start Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: Running command: '/usr/bin/neutron-sriov-nic-agent' Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: + umask 0022 Oct 5 05:41:34 localhost neutron_sriov_agent[264984]: + exec /usr/bin/neutron-sriov-nic-agent Oct 5 05:41:35 localhost systemd-logind[760]: Session 60 logged out. Waiting for processes to exit. Oct 5 05:41:35 localhost systemd[1]: session-60.scope: Deactivated successfully. Oct 5 05:41:35 localhost systemd[1]: session-60.scope: Consumed 24.209s CPU time. Oct 5 05:41:35 localhost systemd-logind[760]: Removed session 60. Oct 5 05:41:35 localhost nova_compute[238314]: 2025-10-05 09:41:35.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:35 localhost nova_compute[238314]: 2025-10-05 09:41:35.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:35 localhost nova_compute[238314]: 2025-10-05 09:41:35.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5025 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:41:35 localhost nova_compute[238314]: 2025-10-05 09:41:35.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:35 localhost nova_compute[238314]: 2025-10-05 09:41:35.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:35 localhost nova_compute[238314]: 2025-10-05 09:41:35.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:36 localhost nova_compute[238314]: 2025-10-05 09:41:36.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:36 localhost nova_compute[238314]: 2025-10-05 09:41:36.398 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:36 localhost nova_compute[238314]: 2025-10-05 09:41:36.398 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.427 2 INFO neutron.common.config [-] Logging enabled!#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.427 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.428 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.428 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.428 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.428 2 INFO 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.428 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005471150.localdomain'}#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.429 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-c00ca950-2cbf-4ebb-8836-2d791c65aca7 - - - - - -] RPC agent_id: nic-switch-agent.np0005471150.localdomain#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.433 2 INFO neutron.agent.agent_extensions_manager [None req-c00ca950-2cbf-4ebb-8836-2d791c65aca7 - - - - - -] Loaded agent extensions: ['qos']#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.433 2 INFO neutron.agent.agent_extensions_manager [None req-c00ca950-2cbf-4ebb-8836-2d791c65aca7 - - - - - -] Initializing agent extension 'qos'#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.637 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-c00ca950-2cbf-4ebb-8836-2d791c65aca7 - - - - - -] Agent initialized successfully, now running... 
#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.637 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-c00ca950-2cbf-4ebb-8836-2d791c65aca7 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Oct 5 05:41:36 localhost neutron_sriov_agent[264984]: 2025-10-05 09:41:36.637 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-c00ca950-2cbf-4ebb-8836-2d791c65aca7 - - - - - -] Agent out of sync with plugin!#033[00m Oct 5 05:41:38 localhost nova_compute[238314]: 2025-10-05 09:41:38.393 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:39 localhost nova_compute[238314]: 2025-10-05 09:41:39.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:39 localhost nova_compute[238314]: 2025-10-05 09:41:39.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 05:41:39 localhost podman[265017]: 2025-10-05 09:41:39.669818003 +0000 UTC m=+0.076970697 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 05:41:39 localhost podman[265017]: 2025-10-05 09:41:39.683324204 +0000 UTC m=+0.090476868 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, build-date=2025-08-20T13:12:41, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, release=1755695350, io.openshift.tags=minimal rhel9) Oct 5 05:41:39 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.400 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.400 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.401 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.401 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: 
np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.401 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:41:40 localhost systemd[1]: tmp-crun.pcOEJh.mount: Deactivated successfully. 
Oct 5 05:41:40 localhost podman[265056]: 2025-10-05 09:41:40.710241549 +0000 UTC m=+0.093805253 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:41:40 localhost podman[265056]: 2025-10-05 09:41:40.717388466 +0000 UTC m=+0.100952180 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:41:40 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.857 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.916 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:41:40 localhost nova_compute[238314]: 2025-10-05 09:41:40.917 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:41:40 localhost sshd[265081]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:41:41 localhost systemd-logind[760]: New session 61 of user zuul. Oct 5 05:41:41 localhost systemd[1]: Started Session 61 of User zuul. Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.153 2 WARNING nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.158 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12196MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.158 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.159 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.239 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.240 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.240 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.288 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.746 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.754 2 DEBUG nova.compute.provider_tree [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.770 2 DEBUG 
nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.772 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:41:41 localhost nova_compute[238314]: 2025-10-05 09:41:41.773 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:41:42 localhost python3.9[265214]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:41:42 localhost nova_compute[238314]: 2025-10-05 09:41:42.773 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:42 localhost nova_compute[238314]: 2025-10-05 09:41:42.774 2 DEBUG oslo_service.periodic_task [None 
req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:43 localhost nova_compute[238314]: 2025-10-05 09:41:43.378 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:43 localhost nova_compute[238314]: 2025-10-05 09:41:43.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:41:43 localhost nova_compute[238314]: 2025-10-05 09:41:43.379 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:41:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64192 DF PROTO=TCP SPT=44198 DPT=9102 SEQ=2565128935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEED9F5E0000000001030307) Oct 5 05:41:43 localhost python3.9[265328]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:41:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:41:44 localhost nova_compute[238314]: 2025-10-05 09:41:44.285 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:41:44 localhost nova_compute[238314]: 2025-10-05 09:41:44.285 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:41:44 localhost nova_compute[238314]: 2025-10-05 09:41:44.285 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:41:44 localhost nova_compute[238314]: 2025-10-05 09:41:44.286 2 DEBUG nova.objects.instance [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:41:44 localhost podman[265392]: 2025-10-05 09:41:44.307283859 +0000 UTC m=+0.095594475 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:41:44 localhost podman[265392]: 2025-10-05 09:41:44.3156216 +0000 UTC m=+0.103932216 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 
'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:41:44 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:41:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64193 DF PROTO=TCP SPT=44198 DPT=9102 SEQ=2565128935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEDA35D0000000001030307) Oct 5 05:41:44 localhost python3.9[265391]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:41:45 localhost nova_compute[238314]: 2025-10-05 09:41:45.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:45 localhost nova_compute[238314]: 2025-10-05 09:41:45.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:46 localhost nova_compute[238314]: 2025-10-05 09:41:46.222 2 DEBUG nova.network.neutron [None 
req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:41:46 localhost nova_compute[238314]: 2025-10-05 09:41:46.381 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:41:46 localhost nova_compute[238314]: 2025-10-05 09:41:46.382 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:41:46 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64194 DF PROTO=TCP SPT=44198 DPT=9102 SEQ=2565128935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEDAB5D0000000001030307) Oct 5 05:41:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:41:48 localhost podman[265527]: 2025-10-05 09:41:48.405912754 +0000 UTC m=+0.087948844 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 05:41:48 localhost podman[265527]: 2025-10-05 09:41:48.420729283 +0000 UTC m=+0.102765333 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3) Oct 5 05:41:48 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:41:48 localhost python3.9[265526]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:41:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64195 DF PROTO=TCP SPT=44198 DPT=9102 SEQ=2565128935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEDBB1E0000000001030307) Oct 5 05:41:50 localhost python3.9[265657]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:50 localhost nova_compute[238314]: 2025-10-05 09:41:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:50 localhost nova_compute[238314]: 2025-10-05 09:41:50.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:50 localhost nova_compute[238314]: 2025-10-05 09:41:50.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:41:50 
localhost nova_compute[238314]: 2025-10-05 09:41:50.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:50 localhost nova_compute[238314]: 2025-10-05 09:41:50.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:50 localhost nova_compute[238314]: 2025-10-05 09:41:50.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:51 localhost python3.9[265767]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:51 localhost podman[248506]: time="2025-10-05T09:41:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:41:51 localhost podman[248506]: @ - - [05/Oct/2025:09:41:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136014 "" "Go-http-client/1.1" Oct 5 05:41:51 localhost podman[248506]: @ - - [05/Oct/2025:09:41:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16897 "" "Go-http-client/1.1" Oct 5 05:41:52 localhost openstack_network_exporter[250601]: ERROR 09:41:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:41:52 localhost openstack_network_exporter[250601]: Oct 5 05:41:52 localhost openstack_network_exporter[250601]: ERROR 09:41:52 appctl.go:131: Failed to prepare call to 
ovsdb-server: no control socket files found for the ovs db server Oct 5 05:41:52 localhost openstack_network_exporter[250601]: ERROR 09:41:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:41:52 localhost openstack_network_exporter[250601]: ERROR 09:41:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:41:52 localhost openstack_network_exporter[250601]: ERROR 09:41:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:41:52 localhost openstack_network_exporter[250601]: Oct 5 05:41:52 localhost python3.9[265877]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:52 localhost python3.9[265987]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:53 localhost python3.9[266097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None attributes=None Oct 5 05:41:54 localhost python3.9[266207]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:54 localhost python3.9[266317]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:55 localhost nova_compute[238314]: 2025-10-05 09:41:55.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:55 localhost nova_compute[238314]: 2025-10-05 09:41:55.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:41:55 localhost nova_compute[238314]: 2025-10-05 09:41:55.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:41:55 localhost nova_compute[238314]: 2025-10-05 09:41:55.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:55 localhost nova_compute[238314]: 2025-10-05 09:41:55.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:41:55 localhost nova_compute[238314]: 2025-10-05 09:41:55.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:41:55 localhost python3.9[266427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:41:56 localhost podman[266516]: 2025-10-05 09:41:56.465251514 +0000 UTC m=+0.078586454 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3) Oct 5 05:41:56 localhost podman[266516]: 2025-10-05 09:41:56.475694296 +0000 UTC m=+0.089029256 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:41:56 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:41:56 localhost python3.9[266515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657315.248375-278-79981803623769/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:57 localhost python3.9[266642]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:41:57 localhost systemd[1]: tmp-crun.Slow85.mount: Deactivated successfully. 
Oct 5 05:41:57 localhost podman[266729]: 2025-10-05 09:41:57.688119777 +0000 UTC m=+0.098045256 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:41:57 localhost python3.9[266728]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657316.7670639-323-155266131805435/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:57 localhost podman[266729]: 2025-10-05 09:41:57.697754985 +0000 UTC m=+0.107680444 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:41:57 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:41:58 localhost python3.9[266854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:58 localhost python3.9[266940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657317.8254848-323-195284076817469/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:59 localhost python3.9[267048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:41:59 localhost systemd[1]: tmp-crun.jLzmuw.mount: Deactivated successfully. 
Oct 5 05:41:59 localhost podman[267096]: 2025-10-05 09:41:59.689692528 +0000 UTC m=+0.095390949 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 05:41:59 localhost podman[267096]: 2025-10-05 09:41:59.731829257 +0000 UTC m=+0.137527668 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 05:41:59 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:41:59 localhost python3.9[267160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657318.943703-323-111381013129484/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=b2956b0163fd72e9e43e5e3b29ec10ef318c7d5a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:00 localhost nova_compute[238314]: 2025-10-05 09:42:00.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:00 localhost nova_compute[238314]: 2025-10-05 09:42:00.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:01 localhost python3.9[267268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:02 localhost python3.9[267354]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657320.7577078-497-191296461801291/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=c1fe3b6875a03fe9c93d1be48aa23d16c4ec18ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:02 localhost python3.9[267462]: ansible-ansible.legacy.stat 
Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:04 localhost python3.9[267548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657322.4827995-542-177948865576514/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:42:04 localhost podman[267648]: 2025-10-05 09:42:04.660418923 +0000 UTC m=+0.064464195 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:42:04 localhost podman[267648]: 2025-10-05 09:42:04.672928894 +0000 UTC m=+0.076974186 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Oct 5 05:42:04 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:42:04 localhost python3.9[267667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:05 localhost python3.9[267761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657324.3774328-542-21929097742553/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:05 localhost nova_compute[238314]: 2025-10-05 09:42:05.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:42:05 localhost nova_compute[238314]: 2025-10-05 09:42:05.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:42:05 localhost 
nova_compute[238314]: 2025-10-05 09:42:05.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:42:05 localhost nova_compute[238314]: 2025-10-05 09:42:05.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:42:05 localhost nova_compute[238314]: 2025-10-05 09:42:05.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:05 localhost nova_compute[238314]: 2025-10-05 09:42:05.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:42:05 localhost python3.9[267869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:06 localhost python3.9[267924]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:06 localhost python3.9[268032]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:07 localhost python3.9[268118]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657326.551299-629-173671509809883/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:08 localhost python3.9[268226]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:42:08 localhost python3.9[268374]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:09 localhost python3.9[268515]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:42:09 localhost systemd[1]: tmp-crun.pckhM4.mount: Deactivated successfully. 
Oct 5 05:42:09 localhost podman[268577]: 2025-10-05 09:42:09.884223285 +0000 UTC m=+0.099173959 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41) Oct 5 05:42:09 localhost podman[268577]: 2025-10-05 09:42:09.902424732 +0000 UTC m=+0.117375406 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, summary=Provides 
the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 05:42:09 localhost 
systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:42:09 localhost python3.9[268576]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:10 localhost python3.9[268720]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:10 localhost nova_compute[238314]: 2025-10-05 09:42:10.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:42:10 localhost nova_compute[238314]: 2025-10-05 09:42:10.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:10 localhost nova_compute[238314]: 2025-10-05 09:42:10.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:42:10 localhost nova_compute[238314]: 2025-10-05 09:42:10.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:42:10 localhost nova_compute[238314]: 2025-10-05 09:42:10.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 
05:42:10 localhost nova_compute[238314]: 2025-10-05 09:42:10.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:42:10 localhost podman[268778]: 2025-10-05 09:42:10.935587089 +0000 UTC m=+0.080137658 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:42:10 localhost podman[268778]: 2025-10-05 09:42:10.943606451 +0000 UTC m=+0.088157050 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:42:10 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:42:11 localhost python3.9[268777]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:12 localhost python3.9[268911]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:12 localhost python3.9[269021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56009 DF PROTO=TCP SPT=51084 DPT=9102 SEQ=2996445626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEE148E0000000001030307) Oct 5 05:42:13 localhost python3.9[269078]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:42:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56010 DF PROTO=TCP SPT=51084 DPT=9102 SEQ=2996445626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEE189E0000000001030307) Oct 5 05:42:14 localhost podman[269189]: 2025-10-05 09:42:14.495543867 +0000 UTC m=+0.082265500 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:42:14 localhost podman[269189]: 2025-10-05 09:42:14.506889765 +0000 UTC m=+0.093611418 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:42:14 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:42:14 localhost python3.9[269188]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:15 localhost python3.9[269269]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:15 localhost nova_compute[238314]: 2025-10-05 09:42:15.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:15 localhost python3.9[269379]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:42:15 localhost systemd[1]: Reloading. Oct 5 05:42:16 localhost systemd-rc-local-generator[269402]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:42:16 localhost systemd-sysv-generator[269408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:42:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:42:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56011 DF PROTO=TCP SPT=51084 DPT=9102 SEQ=2996445626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEE209E0000000001030307) Oct 5 05:42:17 localhost python3.9[269527]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:17 localhost sshd[269546]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:42:17 localhost python3.9[269586]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:18 localhost python3.9[269697]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:18 localhost systemd[1]: Started /usr/bin/podman healthcheck 
run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:42:18 localhost systemd[1]: tmp-crun.IIkHEJ.mount: Deactivated successfully. Oct 5 05:42:18 localhost podman[269755]: 2025-10-05 09:42:18.584067396 +0000 UTC m=+0.104754781 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) 
Oct 5 05:42:18 localhost podman[269755]: 2025-10-05 09:42:18.596264281 +0000 UTC m=+0.116951686 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:42:18 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:42:18 localhost python3.9[269754]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:19 localhost python3.9[269883]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:42:19 localhost systemd[1]: Reloading. Oct 5 05:42:19 localhost systemd-rc-local-generator[269912]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:42:19 localhost systemd-sysv-generator[269915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:42:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:42:19 localhost systemd[1]: Starting Create netns directory... Oct 5 05:42:19 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:42:19 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:42:19 localhost systemd[1]: Finished Create netns directory. 
Oct 5 05:42:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:42:20.438 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:42:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:42:20.440 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:42:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:42:20.442 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:42:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56012 DF PROTO=TCP SPT=51084 DPT=9102 SEQ=2996445626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEE305E0000000001030307) Oct 5 05:42:20 localhost nova_compute[238314]: 2025-10-05 09:42:20.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:42:20 localhost nova_compute[238314]: 2025-10-05 09:42:20.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:42:20 localhost nova_compute[238314]: 2025-10-05 09:42:20.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:42:20 
localhost nova_compute[238314]: 2025-10-05 09:42:20.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:42:20 localhost nova_compute[238314]: 2025-10-05 09:42:20.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:20 localhost nova_compute[238314]: 2025-10-05 09:42:20.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:42:20 localhost python3.9[270038]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:21 localhost podman[248506]: time="2025-10-05T09:42:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:42:21 localhost python3.9[270148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:21 localhost podman[248506]: @ - - [05/Oct/2025:09:42:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136014 "" "Go-http-client/1.1" Oct 5 05:42:21 localhost podman[248506]: @ - - [05/Oct/2025:09:42:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16897 "" "Go-http-client/1.1" Oct 5 05:42:22 localhost openstack_network_exporter[250601]: ERROR 09:42:22 
appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:42:22 localhost openstack_network_exporter[250601]: ERROR 09:42:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:42:22 localhost openstack_network_exporter[250601]: ERROR 09:42:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:42:22 localhost openstack_network_exporter[250601]: ERROR 09:42:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:42:22 localhost openstack_network_exporter[250601]: Oct 5 05:42:22 localhost openstack_network_exporter[250601]: ERROR 09:42:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:42:22 localhost openstack_network_exporter[250601]: Oct 5 05:42:22 localhost python3.9[270236]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759657341.0079374-1073-29945875235752/.source.json _original_basename=.psisk3go follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:23 localhost python3.9[270347]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:24 localhost sshd[270546]: main: sshd: ssh-rsa 
algorithm is disabled Oct 5 05:42:25 localhost nova_compute[238314]: 2025-10-05 09:42:25.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:42:25 localhost nova_compute[238314]: 2025-10-05 09:42:25.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:42:25 localhost nova_compute[238314]: 2025-10-05 09:42:25.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:42:25 localhost nova_compute[238314]: 2025-10-05 09:42:25.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:42:25 localhost nova_compute[238314]: 2025-10-05 09:42:25.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:25 localhost nova_compute[238314]: 2025-10-05 09:42:25.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:42:26 localhost python3.9[270658]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Oct 5 05:42:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:42:26 localhost podman[270714]: 2025-10-05 09:42:26.687070452 +0000 UTC m=+0.088385461 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:42:26 localhost podman[270714]: 2025-10-05 09:42:26.721617022 +0000 UTC m=+0.122931981 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 05:42:26 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:42:27 localhost python3.9[270788]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:42:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:42:28 localhost podman[270900]: 2025-10-05 09:42:28.003214133 +0000 UTC m=+0.083650051 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:42:28 localhost podman[270900]: 2025-10-05 09:42:28.036877899 +0000 UTC 
m=+0.117313787 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent) Oct 5 05:42:28 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:42:28 localhost python3.9[270899]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:42:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:42:30 localhost systemd[1]: tmp-crun.iocU2W.mount: Deactivated successfully. Oct 5 05:42:30 localhost podman[270962]: 2025-10-05 09:42:30.681609073 +0000 UTC m=+0.087047264 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:42:30 localhost nova_compute[238314]: 2025-10-05 09:42:30.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:30 localhost podman[270962]: 2025-10-05 09:42:30.723919378 +0000 UTC m=+0.129357579 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:42:30 localhost nova_compute[238314]: 2025-10-05 09:42:30.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:30 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:42:32 localhost python3[271080]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:42:32 localhost podman[271118]: Oct 5 05:42:32 localhost podman[271118]: 2025-10-05 09:42:32.623337337 +0000 UTC m=+0.080456923 container create e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_dhcp_agent, config_id=neutron_dhcp, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Oct 5 05:42:32 localhost podman[271118]: 2025-10-05 09:42:32.577787284 +0000 UTC m=+0.034906910 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 05:42:32 localhost python3[271080]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume 
/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 05:42:33 localhost python3.9[271264]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:42:34 localhost python3.9[271376]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:42:35 localhost podman[271432]: 2025-10-05 09:42:35.383775192 +0000 UTC m=+0.082465939 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Oct 5 05:42:35 localhost podman[271432]: 2025-10-05 09:42:35.419629219 +0000 UTC m=+0.118319916 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:42:35 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:42:35 localhost python3.9[271431]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:42:35 localhost nova_compute[238314]: 2025-10-05 09:42:35.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:42:35 localhost nova_compute[238314]: 2025-10-05 09:42:35.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:42:35 localhost nova_compute[238314]: 2025-10-05 09:42:35.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5035 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:42:35 localhost nova_compute[238314]: 2025-10-05 09:42:35.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:42:35 localhost nova_compute[238314]: 2025-10-05 09:42:35.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:42:35 localhost nova_compute[238314]: 2025-10-05 09:42:35.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:36 localhost python3.9[271559]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657355.5530226-1337-77381827238724/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None 
setype=None attributes=None Oct 5 05:42:36 localhost sshd[271576]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:42:37 localhost nova_compute[238314]: 2025-10-05 09:42:37.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:37 localhost nova_compute[238314]: 2025-10-05 09:42:37.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:42:37 localhost python3.9[271617]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:42:37 localhost systemd[1]: Reloading. Oct 5 05:42:37 localhost systemd-rc-local-generator[271640]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:42:37 localhost systemd-sysv-generator[271646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:42:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:42:38 localhost python3.9[271708]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:42:38 localhost systemd[1]: Reloading. 
Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.831 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.832 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.833 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.858 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1213559769 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.859 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 162365672 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dff98a55-7b63-4627-8e75-0c3607d0fb68', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1213559769, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:42:38.833179', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a184568e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': '3d473250c664204bd1fc32c67f1dddb011c13af58ca7261cf9f9cb24884c61eb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 162365672, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 
'timestamp': '2025-10-05T09:42:38.833179', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a1846c28-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': '62f35a9eda0b967c049d192cc65d9eab6b85d1783a2122deaf20a42d32a39ae2'}]}, 'timestamp': '2025-10-05 09:42:38.860004', '_unique_id': '05f5bc2abce9467083ad59dd488130e2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging return 
retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.861 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.863 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.886 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.886 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '83a1ac8c-bcde-4113-b48b-213f99c07973', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:42:38.863804', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a18886a0-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.087757929, 'message_signature': 'f2e274d972514ac2850f150f5748ab07d31e768d4b44b7a7af0414112a45f0c9'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 
'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:42:38.863804', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a1889924-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.087757929, 'message_signature': '9eb7e8fcd7d46ed6b16aa3f14fe465e1e888ee455c8e19bc245b45f5f01bd413'}]}, 'timestamp': '2025-10-05 09:42:38.887364', '_unique_id': '3685301863824bf3888c70a80c28eebd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.888 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.889 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.889 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.890 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes 
volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e03e4158-ff4e-4ccd-9986-70d023327b20', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:42:38.889940', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a1890fb2-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': '84bf2b14f709f9cccefb81dd8557d985762d44f6f689d94fc48446c510053812'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': 
'8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:42:38.889940', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a189220e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': 'f56c674efcfc9fdf5a8bc2313d15b9f9c737a2ab9412c98aa7b99aec09121554'}]}, 'timestamp': '2025-10-05 09:42:38.890857', '_unique_id': '860372354557482eb63cb27602b2c77a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging 
self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.891 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.892 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.896 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8782 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '14bf2519-c96d-4ff4-b543-be9899ddeaaa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8782, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.893034', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a18a00e8-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': '8de58403640e72c4d287fc671f500ef631d327dd8c40d0a3a8550f53e9834090'}]}, 'timestamp': '2025-10-05 09:42:38.896621', '_unique_id': '492f8ecc10af42f3845513a3a8267ae0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.897 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.898 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.898 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '71f87b7f-13f3-480c-a454-73141d041ccd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.898791', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a18a6970-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': '895bdd10f5ef361ebe79f5e9a336e80836ab06e0d896e8906883a6c0fed2f6bc'}]}, 'timestamp': '2025-10-05 09:42:38.899268', '_unique_id': 'b590fb0d72024563bb53df571c867310'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:42:38.900 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.900 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.901 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.901 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8bf5f60b-a8f9-463a-816e-f1ea27dbf36b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.901423', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a18ad068-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': 'b59d830ee1fce7659ace35ab8513c646fe7a5486af91cb179b126c96e62e29da'}]}, 'timestamp': '2025-10-05 09:42:38.901902', '_unique_id': '92d6d13ddc6a4f1b838cc377b59eb3df'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.903 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.904 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'be630381-789f-4d4f-bbb6-988de59cbd9f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.904048', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a18b3684-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': '5a7f7c18dce4980114d6d995f34d74ace3157dd37f84f177741f948e89dea1ba'}]}, 'timestamp': '2025-10-05 09:42:38.904545', '_unique_id': 'e47d4779aa11484b97afa6d4a618b92e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:42:38.905 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.906 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.906 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 274779002 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.907 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 31348051 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '63623567-09e7-43e1-83d0-a9ef9206b2b9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 274779002, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:42:38.906661', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a18b9c64-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': '1c90274f05496a98fbbd0a02d43be4ba76a07f5db35b6541fa0ef416327469af'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 31348051, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:42:38.906661', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a18bac5e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': '9228db57fc0c83cba7327c92b1d8407d12d28c757a878b1ea9c8180627c90898'}]}, 'timestamp': '2025-10-05 09:42:38.907537', '_unique_id': '41d158072d2745eeb7de938d40c9f031'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.909 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.909 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '84737aea-c501-4842-8de4-53b9dc413569', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.909901', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a18c1c48-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': 'e780aed22e95b9a83ff2ec437cbb0a9844aed18f4a899cb7020a9e3d14692841'}]}, 'timestamp': '2025-10-05 09:42:38.910458', '_unique_id': 'ef1f99a48dd6448fa5becdc3aeaeee3e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.911 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.912 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.912 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1b5b7a60-0ee3-4b9c-8d3d-46b995a579db', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.912568', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a18c8354-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': 'a07a8a33157ce9a78fd852248b313dd3ce4e97423fa959ba0c3904351cb99ed5'}]}, 'timestamp': '2025-10-05 09:42:38.913035', '_unique_id': 'd599b7777e6344f59410bee1c46eed93'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.915 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.915 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 574 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.915 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '16c9e76d-293b-44ea-bec1-66e4b58bb877', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 574, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:42:38.915295', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a18cef06-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': '9ff97618a7eb228e3737f44978bec1b9e8023db6edd6b1d8aa0a1c89e6959d28'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:42:38.915295', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a18cff32-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': 'd1f4e9ac54d3f38f5725990053a9fb096ef2d5dde8d713fd9586c317f0205ef3'}]}, 'timestamp': '2025-10-05 09:42:38.916177', '_unique_id': '6357915cd3e941fa800db5278c4dc116'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.917 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.918 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.918 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 73912320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.918 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'fa1b6dac-0e9f-4025-820e-5504f8954877', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73912320, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:42:38.918416', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a18d6904-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': 'b36e1e420210818fe000d91e17a85a4c0167a8d2c6dbfa6802c1c5204c55b461'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:42:38.918416', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a18d7ec6-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': '184c636d8b3cf43897c815e03f1431ca00f5031584863d6d0c926cfe915a837c'}]}, 'timestamp': '2025-10-05 09:42:38.919496', '_unique_id': '1786443c440a4d1496e4b3fe6ec72525'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.921 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.921 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 05:42:38 localhost systemd-sysv-generator[271738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.946 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 52.31640625 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost systemd-rc-local-generator[271733]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5ff77ab8-c817-4958-819c-91167fcabf9c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.31640625, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:42:38.922100', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'a191aa28-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.169790635, 'message_signature': 
'aca70d1e6ec662ae9576074423ccf0b4f7af77ddbd07e33a1527fd33dd0d6bee'}]}, 'timestamp': '2025-10-05 09:42:38.946857', '_unique_id': '514a8e86a3844aa7843eb0374611b6cf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR 
oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.949 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.949 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9c8d769d-8801-4754-8fbc-731807ea04b9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.949639', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 
'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a1922c8c-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': 'f28b3250a9fbe692e13a26972c7cae50f923b14eb5f72326ae110d42a512aba3'}]}, 'timestamp': '2025-10-05 09:42:38.950155', '_unique_id': '0cf215bfe9fd4a77a160673032fa8f4c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging return 
fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 
05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.952 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.953 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.953 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:42:38.955 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6b91f777-ae2b-431e-8e4b-1ffa8c1489f5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:42:38.953153', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a192b68e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': '79cef4eaf238e83c663850a875af90358d4c0644fa275dfb6229fee91d9855f5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:42:38.953153', 
'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a192c7fa-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.057088655, 'message_signature': '384bb6d58e25de20670bd10b022bfecdf7ab4f7325d4d68fd068c640c04b525c'}]}, 'timestamp': '2025-10-05 09:42:38.954100', '_unique_id': '60f668db37074a678e9d851855553059'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in 
_send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.956 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.956 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.956 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 
5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9103cd40-56fe-48a7-8f20-413a659806ea', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:42:38.956467', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a19336b8-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.087757929, 'message_signature': 'c6230d0fd98a125c1f683f4395044b120dd673a0d00f79abbfa73f57d2ef4882'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': 
'2025-10-05T09:42:38.956467', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a19348ce-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.087757929, 'message_signature': 'c1c8fa5b0cc1b2f15af4610a3c4788dfd0e4f1f2948d834aec616a0f543af849'}]}, 'timestamp': '2025-10-05 09:42:38.957458', '_unique_id': '0997c4449fd8401cb5bb1795cbb87681'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 
5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in 
_send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.958 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.959 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.959 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 57820000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '573568e5-0149-46ec-8f54-6d28e7e381e2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 57820000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:42:38.959769', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'a193b76e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.169790635, 'message_signature': '231b0187377daf8b0ffa882ba9322a68cf31f22a79c178d794f56c7eb6940ab4'}]}, 'timestamp': '2025-10-05 09:42:38.960245', '_unique_id': 'a54773e768c14ee4b955e9c7968fa627'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.962 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:42:38.962 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9675575d-831a-478a-b2f4-81d12632b542', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.962568', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a19425aa-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': 
'7baf3dea1679603643771d57860876b15f5fc8e69b3d69e1f68218498a5d8088'}]}, 'timestamp': '2025-10-05 09:42:38.963083', '_unique_id': '815e5b38e9e84b01a9b281d6a639f2f4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR 
oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.964 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.965 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.965 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.966 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2ae18f61-b377-4d6d-9059-8fbce95294fe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:42:38.965765', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a194a1d8-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.087757929, 'message_signature': 'd137a708097e9b9b5e8cda196203ff9787f4e5c87dd328fee04965ad6a7d33ce'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:42:38.965765', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a194b506-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.087757929, 'message_signature': '254c0c4a23e6116f1f04b718a32073fc006eb8087a2bedd4f60e2259442b5d45'}]}, 'timestamp': '2025-10-05 09:42:38.966728', '_unique_id': '835e2ad9074742688b7853ee8e1f3e1b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.967 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.968 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.968 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd4c01761-4cad-4cca-baa7-456f412e2d87', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.968428', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a19505e2-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': 'dac62ea0f4ca88ab4d912f35bcc0bddc865abe4305d196ee233961034fda8c72'}]}, 'timestamp': '2025-10-05 09:42:38.968716', '_unique_id': 'fe988175bc724d748fd6b0d7e62706eb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.969 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:42:38.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '57593ec4-34a6-4c6b-bca9-eb2c25e0387b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:42:38.970090', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'a19546c4-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10838.116958822, 'message_signature': 'd1194e2f4bd4cf716205ab4d4ec843298c0a9428a65f030fb6b7afd23c60f40a'}]}, 'timestamp': '2025-10-05 09:42:38.970378', '_unique_id': 'a83ac88b16484b19aa8b04cd77f9d956'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:42:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 
05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:42:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:42:38.970 12 ERROR oslo_messaging.notify.messaging Oct 5 05:42:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:42:39 localhost systemd[1]: Starting neutron_dhcp_agent container... Oct 5 05:42:39 localhost systemd[1]: tmp-crun.268Hqm.mount: Deactivated successfully. Oct 5 05:42:39 localhost systemd[1]: Started libcrun container. 
Oct 5 05:42:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/607b44990ca7902e75505313b2f74de2bfc3c12246b07d35a54b00d9acfa9e2d/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 5 05:42:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/607b44990ca7902e75505313b2f74de2bfc3c12246b07d35a54b00d9acfa9e2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:42:39 localhost nova_compute[238314]: 2025-10-05 09:42:39.378 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:39 localhost podman[271749]: 2025-10-05 09:42:39.388967007 +0000 UTC m=+0.133050360 container init e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:42:39 localhost podman[271749]: 2025-10-05 09:42:39.404423312 +0000 UTC m=+0.148506665 container start e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_managed=true, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:42:39 localhost podman[271749]: neutron_dhcp_agent Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: + sudo -E kolla_set_configs Oct 5 05:42:39 localhost systemd[1]: Started neutron_dhcp_agent container. Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Validating config file Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Copying service configuration files Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Writing out command to execute Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting 
permission for /var/lib/neutron/ns-metadata-proxy Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/456176946c9b2bc12efd840abf43863005adc00f003c5dd0716ca424d2bec219 Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.pid.haproxy Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.conf Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: ++ cat /run_command Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: + CMD=/usr/bin/neutron-dhcp-agent Oct 5 
05:42:39 localhost neutron_dhcp_agent[271763]: + ARGS= Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: + sudo kolla_copy_cacerts Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: + [[ ! -n '' ]] Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: + . kolla_extend_start Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: Running command: '/usr/bin/neutron-dhcp-agent' Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: + umask 0022 Oct 5 05:42:39 localhost neutron_dhcp_agent[271763]: + exec /usr/bin/neutron-dhcp-agent Oct 5 05:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:42:40 localhost podman[271796]: 2025-10-05 09:42:40.172200793 +0000 UTC m=+0.078447388 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container) Oct 5 05:42:40 localhost podman[271796]: 2025-10-05 09:42:40.211875604 +0000 UTC m=+0.118122229 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git) Oct 5 05:42:40 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:42:40 localhost systemd[1]: tmp-crun.yaF3HI.mount: Deactivated successfully. 
Oct 5 05:42:40 localhost nova_compute[238314]: 2025-10-05 09:42:40.373 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:40 localhost nova_compute[238314]: 2025-10-05 09:42:40.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:40 localhost nova_compute[238314]: 2025-10-05 09:42:40.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:40 localhost nova_compute[238314]: 2025-10-05 09:42:40.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:40 localhost neutron_dhcp_agent[271763]: 2025-10-05 09:42:40.777 271767 INFO neutron.common.config [-] Logging enabled!#033[00m Oct 5 05:42:40 localhost neutron_dhcp_agent[271763]: 2025-10-05 09:42:40.778 271767 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m Oct 5 05:42:41 localhost neutron_dhcp_agent[271763]: 2025-10-05 09:42:41.141 271767 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Oct 5 05:42:41 localhost python3.9[271907]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:42:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:42:41 localhost neutron_dhcp_agent[271763]: 2025-10-05 09:42:41.272 271767 INFO neutron.agent.dhcp.agent [None req-ee1b4570-384b-43c2-8941-31fdaf334c39 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 05:42:41 localhost neutron_dhcp_agent[271763]: 2025-10-05 09:42:41.272 271767 INFO neutron.agent.dhcp.agent [None req-ee1b4570-384b-43c2-8941-31fdaf334c39 - - - - - -] Synchronizing state complete#033[00m Oct 5 05:42:41 localhost systemd[1]: tmp-crun.nt9T99.mount: Deactivated successfully. Oct 5 05:42:41 localhost podman[271910]: 2025-10-05 09:42:41.352018256 +0000 UTC m=+0.088429812 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:42:41 localhost neutron_dhcp_agent[271763]: 2025-10-05 09:42:41.355 271767 INFO neutron.agent.dhcp.agent [None req-ee1b4570-384b-43c2-8941-31fdaf334c39 - - - - - -] DHCP agent started#033[00m Oct 5 05:42:41 localhost podman[271910]: 2025-10-05 09:42:41.365382844 +0000 UTC m=+0.101794360 container exec_died 
9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:42:41 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 05:42:41 localhost nova_compute[238314]: 2025-10-05 09:42:41.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:41 localhost nova_compute[238314]: 2025-10-05 09:42:41.394 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:42:41 localhost nova_compute[238314]: 2025-10-05 09:42:41.395 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:42:41 localhost nova_compute[238314]: 2025-10-05 09:42:41.395 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:42:41 localhost nova_compute[238314]: 2025-10-05 09:42:41.396 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:42:41 localhost nova_compute[238314]: 2025-10-05 09:42:41.396 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd 
(subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:42:41 localhost nova_compute[238314]: 2025-10-05 09:42:41.845 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:42:41 localhost nova_compute[238314]: 2025-10-05 09:42:41.961 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:42:41 localhost nova_compute[238314]: 2025-10-05 09:42:41.961 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.175 2 WARNING nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.176 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12059MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.177 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.177 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.260 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.261 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.261 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:42:42 localhost systemd[1]: Stopping neutron_dhcp_agent container... Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.350 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:42:42 localhost neutron_dhcp_agent[271763]: 2025-10-05 09:42:42.397 271767 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Oct 5 05:42:42 localhost systemd[1]: libpod-e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef.scope: Deactivated successfully. 
Oct 5 05:42:42 localhost podman[271957]: 2025-10-05 09:42:42.713585356 +0000 UTC m=+0.412965546 container died e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:42:42 localhost systemd[1]: libpod-e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef.scope: Consumed 2.082s CPU time. 
Oct 5 05:42:42 localhost podman[271957]: 2025-10-05 09:42:42.767381926 +0000 UTC m=+0.466762096 container cleanup e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}) Oct 5 05:42:42 localhost podman[271957]: neutron_dhcp_agent Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.797 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf 
/etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.805 2 DEBUG nova.compute.provider_tree [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.828 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.831 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:42:42 localhost nova_compute[238314]: 2025-10-05 09:42:42.831 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:42:42 localhost podman[272019]: error opening file 
`/run/crun/e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef/status`: No such file or directory Oct 5 05:42:42 localhost podman[272006]: 2025-10-05 09:42:42.856766054 +0000 UTC m=+0.061628655 container cleanup e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, config_id=neutron_dhcp, container_name=neutron_dhcp_agent) Oct 5 05:42:42 localhost podman[272006]: neutron_dhcp_agent Oct 5 05:42:42 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully. 
Oct 5 05:42:42 localhost systemd[1]: Stopped neutron_dhcp_agent container. Oct 5 05:42:42 localhost systemd[1]: Starting neutron_dhcp_agent container... Oct 5 05:42:43 localhost systemd[1]: Started libcrun container. Oct 5 05:42:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/607b44990ca7902e75505313b2f74de2bfc3c12246b07d35a54b00d9acfa9e2d/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 5 05:42:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/607b44990ca7902e75505313b2f74de2bfc3c12246b07d35a54b00d9acfa9e2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:42:43 localhost podman[272021]: 2025-10-05 09:42:43.015289003 +0000 UTC m=+0.124364411 container init e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251001, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=neutron_dhcp) Oct 5 05:42:43 localhost podman[272021]: 2025-10-05 09:42:43.024362062 +0000 UTC m=+0.133437470 container start e56283f7f1770e3d5debd533e4705cba19233ab607d9c855bff0de3cf1a276ef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, container_name=neutron_dhcp_agent, config_id=neutron_dhcp, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'e609c0ce0ce1865e90529704591dc894ac482ec0984b40d0376042e8d3eaf630'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 05:42:43 localhost podman[272021]: neutron_dhcp_agent Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: + sudo -E kolla_set_configs Oct 5 05:42:43 localhost systemd[1]: Started neutron_dhcp_agent container. Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Validating config file Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Copying service configuration files Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Writing out command to execute Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for 
/var/lib/neutron/ns-metadata-proxy Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/456176946c9b2bc12efd840abf43863005adc00f003c5dd0716ca424d2bec219 Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.pid.haproxy Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.conf Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: ++ cat /run_command 
Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: + CMD=/usr/bin/neutron-dhcp-agent Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: + ARGS= Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: + sudo kolla_copy_cacerts Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: + [[ ! -n '' ]] Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: + . kolla_extend_start Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: Running command: '/usr/bin/neutron-dhcp-agent' Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: + umask 0022 Oct 5 05:42:43 localhost neutron_dhcp_agent[272036]: + exec /usr/bin/neutron-dhcp-agent Oct 5 05:42:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57324 DF PROTO=TCP SPT=54688 DPT=9102 SEQ=4006916642 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEE89BD0000000001030307) Oct 5 05:42:43 localhost systemd[1]: session-61.scope: Deactivated successfully. Oct 5 05:42:43 localhost systemd[1]: session-61.scope: Consumed 35.104s CPU time. Oct 5 05:42:43 localhost systemd-logind[760]: Session 61 logged out. Waiting for processes to exit. Oct 5 05:42:43 localhost systemd-logind[760]: Removed session 61. 
Oct 5 05:42:43 localhost nova_compute[238314]: 2025-10-05 09:42:43.832 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:43 localhost nova_compute[238314]: 2025-10-05 09:42:43.832 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:44 localhost neutron_dhcp_agent[272036]: 2025-10-05 09:42:44.266 272040 INFO neutron.common.config [-] Logging enabled!#033[00m Oct 5 05:42:44 localhost neutron_dhcp_agent[272036]: 2025-10-05 09:42:44.266 272040 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m Oct 5 05:42:44 localhost nova_compute[238314]: 2025-10-05 09:42:44.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:44 localhost nova_compute[238314]: 2025-10-05 09:42:44.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:42:44 localhost nova_compute[238314]: 2025-10-05 09:42:44.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:42:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57325 DF PROTO=TCP SPT=54688 DPT=9102 SEQ=4006916642 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEE8DDE0000000001030307) Oct 5 05:42:44 localhost nova_compute[238314]: 2025-10-05 09:42:44.450 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:42:44 localhost nova_compute[238314]: 2025-10-05 09:42:44.450 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:42:44 localhost nova_compute[238314]: 2025-10-05 09:42:44.451 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:42:44 localhost nova_compute[238314]: 2025-10-05 09:42:44.451 2 DEBUG nova.objects.instance [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:42:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:42:44 localhost neutron_dhcp_agent[272036]: 2025-10-05 09:42:44.634 272040 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Oct 5 05:42:44 localhost podman[272069]: 2025-10-05 09:42:44.672204005 +0000 UTC m=+0.080962657 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:42:44 localhost podman[272069]: 2025-10-05 09:42:44.680824902 +0000 UTC m=+0.089583584 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:42:44 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:42:45 localhost nova_compute[238314]: 2025-10-05 09:42:45.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:45 localhost neutron_dhcp_agent[272036]: 2025-10-05 09:42:45.865 272040 INFO neutron.agent.dhcp.agent [None req-79fc2a38-a450-4417-b58b-f1b9396cd79e - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 05:42:45 localhost neutron_dhcp_agent[272036]: 2025-10-05 09:42:45.865 272040 INFO neutron.agent.dhcp.agent [None req-79fc2a38-a450-4417-b58b-f1b9396cd79e - - - - - -] Synchronizing state complete#033[00m Oct 5 05:42:45 localhost nova_compute[238314]: 2025-10-05 09:42:45.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:45 localhost ovn_metadata_agent[163429]: 2025-10-05 09:42:45.883 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:42:45 localhost ovn_metadata_agent[163429]: 2025-10-05 09:42:45.884 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 05:42:45 localhost ovn_metadata_agent[163429]: 2025-10-05 09:42:45.885 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:42:45 localhost neutron_dhcp_agent[272036]: 2025-10-05 09:42:45.903 272040 INFO neutron.agent.dhcp.agent [None req-79fc2a38-a450-4417-b58b-f1b9396cd79e - - - - - -] DHCP agent started#033[00m Oct 5 05:42:46 localhost nova_compute[238314]: 2025-10-05 09:42:46.054 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:42:46 localhost nova_compute[238314]: 2025-10-05 
09:42:46.076 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:42:46 localhost nova_compute[238314]: 2025-10-05 09:42:46.077 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:42:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57326 DF PROTO=TCP SPT=54688 DPT=9102 SEQ=4006916642 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEE95DD0000000001030307) Oct 5 05:42:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:42:49 localhost podman[272094]: 2025-10-05 09:42:49.67538411 +0000 UTC m=+0.086903961 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:42:49 localhost podman[272094]: 2025-10-05 09:42:49.684928182 +0000 UTC m=+0.096448033 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true) Oct 5 05:42:49 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:42:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57327 DF PROTO=TCP SPT=54688 DPT=9102 SEQ=4006916642 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEEA59D0000000001030307) Oct 5 05:42:50 localhost nova_compute[238314]: 2025-10-05 09:42:50.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:51 localhost podman[248506]: time="2025-10-05T09:42:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:42:51 localhost podman[248506]: @ - - [05/Oct/2025:09:42:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138319 "" "Go-http-client/1.1" Oct 5 05:42:51 localhost podman[248506]: @ - - [05/Oct/2025:09:42:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17340 "" "Go-http-client/1.1" Oct 5 05:42:52 localhost openstack_network_exporter[250601]: ERROR 09:42:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:42:52 localhost openstack_network_exporter[250601]: ERROR 09:42:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:42:52 localhost openstack_network_exporter[250601]: ERROR 09:42:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:42:52 localhost openstack_network_exporter[250601]: ERROR 09:42:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:42:52 localhost openstack_network_exporter[250601]: Oct 5 05:42:52 localhost openstack_network_exporter[250601]: ERROR 09:42:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an 
existing datapath Oct 5 05:42:52 localhost openstack_network_exporter[250601]: Oct 5 05:42:55 localhost nova_compute[238314]: 2025-10-05 09:42:55.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:42:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:42:57 localhost podman[272114]: 2025-10-05 09:42:57.656267396 +0000 UTC m=+0.065439480 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 05:42:57 localhost podman[272114]: 2025-10-05 09:42:57.690980121 +0000 UTC m=+0.100152205 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 05:42:57 localhost systemd[1]: 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:42:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:42:58 localhost podman[272134]: 2025-10-05 09:42:58.666765593 +0000 UTC m=+0.075145708 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:42:58 localhost podman[272134]: 2025-10-05 09:42:58.671892274 +0000 UTC m=+0.080272369 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Oct 5 05:42:58 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:43:00 localhost nova_compute[238314]: 2025-10-05 09:43:00.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:43:00 localhost nova_compute[238314]: 2025-10-05 09:43:00.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:00 localhost nova_compute[238314]: 2025-10-05 09:43:00.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:43:00 localhost nova_compute[238314]: 2025-10-05 09:43:00.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:00 localhost nova_compute[238314]: 2025-10-05 09:43:00.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:00 localhost nova_compute[238314]: 2025-10-05 09:43:00.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:43:01 localhost podman[272152]: 2025-10-05 09:43:01.664832203 +0000 UTC m=+0.077501631 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS) Oct 5 05:43:01 localhost podman[272152]: 2025-10-05 09:43:01.702833778 +0000 UTC m=+0.115503186 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:43:01 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:43:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:43:05 localhost podman[272178]: 2025-10-05 09:43:05.673467583 +0000 UTC m=+0.082012736 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:43:05 localhost podman[272178]: 2025-10-05 09:43:05.684833326 +0000 UTC m=+0.093378519 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:43:05 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:43:05 localhost nova_compute[238314]: 2025-10-05 09:43:05.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:43:05 localhost nova_compute[238314]: 2025-10-05 09:43:05.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:43:05 localhost nova_compute[238314]: 2025-10-05 09:43:05.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:43:05 localhost nova_compute[238314]: 2025-10-05 09:43:05.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:05 localhost nova_compute[238314]: 2025-10-05 09:43:05.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:05 localhost nova_compute[238314]: 2025-10-05 09:43:05.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 05:43:10 localhost podman[272250]: 2025-10-05 09:43:10.650924753 +0000 UTC m=+0.064825953 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 05:43:10 localhost podman[272250]: 2025-10-05 09:43:10.664743683 +0000 UTC m=+0.078644853 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 05:43:10 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:43:10 localhost nova_compute[238314]: 2025-10-05 09:43:10.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:10 localhost nova_compute[238314]: 2025-10-05 09:43:10.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:43:11 localhost podman[272305]: 2025-10-05 09:43:11.591489547 +0000 UTC m=+0.077386130 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:43:11 localhost podman[272305]: 2025-10-05 09:43:11.603855596 +0000 UTC m=+0.089752149 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:43:11 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:43:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13516 DF PROTO=TCP SPT=33380 DPT=9102 SEQ=819558180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEEFEED0000000001030307) Oct 5 05:43:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13517 DF PROTO=TCP SPT=33380 DPT=9102 SEQ=819558180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEF02DD0000000001030307) Oct 5 05:43:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:43:15 localhost podman[272329]: 2025-10-05 09:43:15.678771458 +0000 UTC m=+0.090537650 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:43:15 localhost podman[272329]: 2025-10-05 09:43:15.713726608 +0000 UTC m=+0.125492780 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:43:15 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:43:15 localhost nova_compute[238314]: 2025-10-05 09:43:15.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:43:15 localhost nova_compute[238314]: 2025-10-05 09:43:15.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:15 localhost nova_compute[238314]: 2025-10-05 09:43:15.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:43:15 localhost nova_compute[238314]: 2025-10-05 09:43:15.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:15 localhost nova_compute[238314]: 2025-10-05 09:43:15.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:15 localhost nova_compute[238314]: 2025-10-05 09:43:15.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:15 localhost ovn_controller[157794]: 2025-10-05T09:43:15Z|00046|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory Oct 5 05:43:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13518 DF PROTO=TCP SPT=33380 DPT=9102 SEQ=819558180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEF0ADD0000000001030307) Oct 5 05:43:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:43:20.439 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:43:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:43:20.440 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:43:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:43:20.441 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:43:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13519 DF PROTO=TCP SPT=33380 DPT=9102 SEQ=819558180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEF1A9D0000000001030307) Oct 5 05:43:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:43:20 localhost podman[272351]: 2025-10-05 09:43:20.675988203 +0000 UTC m=+0.084980070 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd) Oct 5 05:43:20 localhost podman[272351]: 2025-10-05 09:43:20.688708971 +0000 UTC m=+0.097700838 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 05:43:20 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:43:20 localhost nova_compute[238314]: 2025-10-05 09:43:20.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:21 localhost podman[248506]: time="2025-10-05T09:43:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:43:21 localhost podman[248506]: @ - - [05/Oct/2025:09:43:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138319 "" "Go-http-client/1.1" Oct 5 05:43:21 localhost podman[248506]: @ - - [05/Oct/2025:09:43:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17341 "" "Go-http-client/1.1" Oct 5 05:43:22 localhost openstack_network_exporter[250601]: ERROR 09:43:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:43:22 localhost openstack_network_exporter[250601]: ERROR 09:43:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:43:22 localhost openstack_network_exporter[250601]: ERROR 09:43:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:43:22 localhost openstack_network_exporter[250601]: ERROR 09:43:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:43:22 localhost openstack_network_exporter[250601]: Oct 5 05:43:22 localhost openstack_network_exporter[250601]: ERROR 09:43:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:43:22 localhost openstack_network_exporter[250601]: Oct 5 05:43:25 localhost nova_compute[238314]: 2025-10-05 09:43:25.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:28 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:43:28 localhost podman[272370]: 2025-10-05 09:43:28.68160164 +0000 UTC m=+0.088611951 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:43:28 localhost podman[272370]: 2025-10-05 09:43:28.698821647 +0000 UTC m=+0.105831958 container exec_died 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 5 05:43:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:43:28 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:43:28 localhost systemd[1]: tmp-crun.r9w13p.mount: Deactivated successfully. 
Oct 5 05:43:28 localhost podman[272389]: 2025-10-05 09:43:28.813289926 +0000 UTC m=+0.089573139 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:43:28 localhost podman[272389]: 2025-10-05 09:43:28.823727841 +0000 UTC 
m=+0.100011034 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 05:43:28 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:43:30 localhost nova_compute[238314]: 2025-10-05 09:43:30.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:43:32 localhost podman[272407]: 2025-10-05 09:43:32.683780236 +0000 UTC m=+0.082844318 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 05:43:32 localhost podman[272407]: 2025-10-05 09:43:32.740760324 +0000 UTC m=+0.139824386 container exec_died 
1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 05:43:32 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:43:33 localhost sshd[272432]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:43:33 localhost systemd-logind[760]: New session 62 of user zuul. Oct 5 05:43:33 localhost systemd[1]: Started Session 62 of User zuul. 
Oct 5 05:43:34 localhost python3.9[272543]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:43:35 localhost python3.9[272657]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:35 localhost nova_compute[238314]: 2025-10-05 09:43:35.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:43:35 localhost nova_compute[238314]: 2025-10-05 09:43:35.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:43:35 localhost nova_compute[238314]: 2025-10-05 09:43:35.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:43:35 localhost nova_compute[238314]: 2025-10-05 09:43:35.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:35 localhost nova_compute[238314]: 2025-10-05 09:43:35.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:35 localhost nova_compute[238314]: 2025-10-05 09:43:35.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:36 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:43:36 localhost podman[272768]: 2025-10-05 09:43:36.360721415 +0000 UTC m=+0.093235623 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute) Oct 5 05:43:36 localhost podman[272768]: 2025-10-05 
09:43:36.397860233 +0000 UTC m=+0.130374461 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 05:43:36 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:43:36 localhost python3.9[272767]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:37 localhost python3.9[272896]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:37 localhost nova_compute[238314]: 2025-10-05 09:43:37.073 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:37 localhost nova_compute[238314]: 2025-10-05 09:43:37.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:37 localhost nova_compute[238314]: 2025-10-05 09:43:37.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 05:43:37 localhost nova_compute[238314]: 2025-10-05 09:43:37.400 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] There are 0 
instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 05:43:37 localhost nova_compute[238314]: 2025-10-05 09:43:37.401 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:37 localhost nova_compute[238314]: 2025-10-05 09:43:37.401 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 05:43:37 localhost python3.9[273006]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:43:38 localhost python3.9[273116]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:39 localhost nova_compute[238314]: 2025-10-05 09:43:39.422 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:39 localhost nova_compute[238314]: 2025-10-05 09:43:39.423 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:39 localhost nova_compute[238314]: 2025-10-05 09:43:39.423 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:43:39 localhost python3.9[273226]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:43:40 localhost nova_compute[238314]: 2025-10-05 09:43:40.378 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:40 localhost nova_compute[238314]: 2025-10-05 09:43:40.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:43:41 localhost systemd[1]: tmp-crun.99up4x.mount: Deactivated successfully. 
Oct 5 05:43:41 localhost nova_compute[238314]: 2025-10-05 09:43:41.372 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:41 localhost nova_compute[238314]: 2025-10-05 09:43:41.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:41 localhost podman[273338]: 2025-10-05 09:43:41.384268275 +0000 UTC m=+0.110735737 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, 
Inc., distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter) Oct 5 05:43:41 localhost podman[273338]: 2025-10-05 09:43:41.398771464 +0000 UTC m=+0.125238906 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64) Oct 5 05:43:41 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:43:41 localhost python3.9[273339]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:43:42 localhost nova_compute[238314]: 2025-10-05 09:43:42.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:43:42 localhost podman[273360]: 2025-10-05 09:43:42.674299201 +0000 UTC m=+0.080886294 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:43:42 localhost podman[273360]: 2025-10-05 09:43:42.712856759 +0000 UTC m=+0.119443822 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:43:42 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.378 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23644 DF PROTO=TCP SPT=43038 DPT=9102 SEQ=2383888944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEF741D0000000001030307) Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.398 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.398 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.399 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.399 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.400 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:43:43 localhost python3.9[273492]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:43:43 localhost network[273529]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:43:43 localhost network[273530]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:43:43 localhost network[273531]: It is advised to switch to 'NetworkManager' instead for network management. 
Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.868 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.966 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:43:43 localhost nova_compute[238314]: 2025-10-05 09:43:43.966 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.164 2 WARNING nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.167 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12054MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.167 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.168 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.392 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.394 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.394 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:43:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23645 DF PROTO=TCP SPT=43038 DPT=9102 SEQ=2383888944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEF781E0000000001030307) Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.606 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Refreshing inventories for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.772 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Updating ProviderTree inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 
512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.772 2 DEBUG nova.compute.provider_tree [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.786 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Refreshing aggregate associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.829 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Refreshing trait associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, traits: 
COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_F16C,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 05:43:44 localhost nova_compute[238314]: 2025-10-05 09:43:44.861 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:43:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= 
instead. Support for MemoryLimit= will be removed soon. Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.349 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.356 2 DEBUG nova.compute.provider_tree [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.381 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.383 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.384 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] 
Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.385 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:43:45 localhost podman[273629]: 2025-10-05 09:43:45.831259204 +0000 UTC m=+0.075831781 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:43:45 localhost podman[273629]: 2025-10-05 09:43:45.839988371 +0000 UTC m=+0.084560938 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:43:45 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:43:45 localhost nova_compute[238314]: 2025-10-05 09:43:45.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23646 DF PROTO=TCP SPT=43038 DPT=9102 SEQ=2383888944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEF801D0000000001030307) Oct 5 05:43:47 localhost nova_compute[238314]: 2025-10-05 09:43:47.424 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:47 localhost 
nova_compute[238314]: 2025-10-05 09:43:47.425 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:43:47 localhost nova_compute[238314]: 2025-10-05 09:43:47.425 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:43:48 localhost nova_compute[238314]: 2025-10-05 09:43:48.463 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:43:48 localhost nova_compute[238314]: 2025-10-05 09:43:48.463 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:43:48 localhost nova_compute[238314]: 2025-10-05 09:43:48.464 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:43:48 localhost nova_compute[238314]: 2025-10-05 09:43:48.464 2 DEBUG nova.objects.instance [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:43:48 localhost python3.9[273813]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset 
follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:43:49 localhost python3.9[273923]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:43:50 localhost python3.9[274035]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:43:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23647 DF PROTO=TCP SPT=43038 DPT=9102 SEQ=2383888944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEF8FDD0000000001030307) Oct 5 05:43:50 localhost nova_compute[238314]: 2025-10-05 09:43:50.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:50 localhost nova_compute[238314]: 2025-10-05 09:43:50.915 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:43:50 localhost nova_compute[238314]: 2025-10-05 09:43:50.932 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:43:50 localhost nova_compute[238314]: 2025-10-05 09:43:50.933 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:43:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:43:51 localhost podman[274132]: 2025-10-05 09:43:51.058927778 +0000 UTC m=+0.074900174 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd) Oct 5 05:43:51 localhost podman[274132]: 2025-10-05 09:43:51.070753252 +0000 UTC m=+0.086725658 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd) Oct 5 05:43:51 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:43:51 localhost python3.9[274157]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:51 localhost podman[248506]: time="2025-10-05T09:43:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:43:51 localhost podman[248506]: @ - - [05/Oct/2025:09:43:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138319 "" "Go-http-client/1.1" Oct 5 05:43:51 localhost podman[248506]: @ - - [05/Oct/2025:09:43:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17335 "" "Go-http-client/1.1" Oct 5 05:43:52 localhost openstack_network_exporter[250601]: ERROR 09:43:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:43:52 localhost openstack_network_exporter[250601]: ERROR 09:43:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:43:52 localhost openstack_network_exporter[250601]: ERROR 09:43:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:43:52 localhost openstack_network_exporter[250601]: ERROR 09:43:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:43:52 localhost openstack_network_exporter[250601]: Oct 5 05:43:52 localhost openstack_network_exporter[250601]: ERROR 09:43:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:43:52 localhost openstack_network_exporter[250601]: Oct 5 05:43:52 
localhost python3.9[274274]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:43:52 localhost python3.9[274331]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:54 localhost python3.9[274441]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:43:54 localhost python3.9[274498]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:55 localhost python3.9[274608]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Oct 5 05:43:55 localhost nova_compute[238314]: 2025-10-05 09:43:55.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:43:56 localhost python3.9[274718]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:43:56 localhost python3.9[274775]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:43:57 localhost python3.9[274885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:43:57 localhost python3.9[274942]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:43:58 localhost python3.9[275052]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown 
state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:43:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:43:58 localhost systemd[1]: Reloading. Oct 5 05:43:58 localhost podman[275054]: 2025-10-05 09:43:58.857686198 +0000 UTC m=+0.085701879 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid) Oct 5 05:43:58 localhost podman[275054]: 2025-10-05 09:43:58.868969666 +0000 UTC m=+0.096985347 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid) Oct 5 05:43:58 localhost systemd-rc-local-generator[275098]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:43:58 localhost systemd-sysv-generator[275101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:43:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:43:59 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:43:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:43:59 localhost podman[275109]: 2025-10-05 09:43:59.219466397 +0000 UTC m=+0.092819240 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:43:59 localhost podman[275109]: 2025-10-05 09:43:59.248540809 +0000 UTC m=+0.121893652 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:43:59 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:43:59 localhost python3.9[275237]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:00 localhost python3.9[275294]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:00 localhost nova_compute[238314]: 2025-10-05 09:44:00.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:00 localhost nova_compute[238314]: 2025-10-05 09:44:00.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:01 localhost 
python3.9[275404]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:01 localhost python3.9[275461]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:02 localhost python3.9[275571]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:44:02 localhost systemd[1]: Reloading. Oct 5 05:44:02 localhost systemd-rc-local-generator[275598]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:44:02 localhost systemd-sysv-generator[275602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:44:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:44:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:44:02 localhost systemd[1]: Starting Create netns directory... Oct 5 05:44:02 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Oct 5 05:44:02 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:44:02 localhost systemd[1]: Finished Create netns directory. Oct 5 05:44:02 localhost podman[275609]: 2025-10-05 09:44:02.889828211 +0000 UTC m=+0.095294810 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 05:44:02 localhost podman[275609]: 2025-10-05 09:44:02.934731658 +0000 UTC m=+0.140198277 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 05:44:02 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:44:03 localhost python3.9[275746]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:05 localhost python3.9[275856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:05 localhost python3.9[275913]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/iscsid/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/iscsid/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:05 localhost nova_compute[238314]: 2025-10-05 09:44:05.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:05 localhost nova_compute[238314]: 2025-10-05 09:44:05.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:44:06 localhost podman[275931]: 2025-10-05 09:44:06.675322243 +0000 UTC m=+0.081100430 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:44:06 localhost podman[275931]: 2025-10-05 09:44:06.686095977 +0000 UTC m=+0.091874144 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 05:44:06 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:44:07 localhost python3.9[276044]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:07 localhost python3.9[276154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:08 localhost python3.9[276211]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/iscsid.json _original_basename=.vdq6vctq recurse=False state=file path=/var/lib/kolla/config_files/iscsid.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:09 localhost python3.9[276321]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:10 localhost nova_compute[238314]: 2025-10-05 09:44:10.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:10 localhost nova_compute[238314]: 2025-10-05 09:44:10.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms 
timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:10 localhost nova_compute[238314]: 2025-10-05 09:44:10.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:44:10 localhost nova_compute[238314]: 2025-10-05 09:44:10.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:10 localhost nova_compute[238314]: 2025-10-05 09:44:10.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:10 localhost nova_compute[238314]: 2025-10-05 09:44:10.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:11 localhost python3.9[276598]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False Oct 5 05:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 05:44:11 localhost podman[276616]: 2025-10-05 09:44:11.671522272 +0000 UTC m=+0.076951153 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 05:44:11 localhost podman[276616]: 2025-10-05 09:44:11.709269697 +0000 UTC m=+0.114698598 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, release=1755695350, config_id=edpm, architecture=x86_64) Oct 5 05:44:11 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:44:12 localhost python3.9[276771]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:44:13 localhost podman[276906]: 2025-10-05 09:44:13.111095848 +0000 UTC m=+0.070085778 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:44:13 localhost podman[276906]: 2025-10-05 09:44:13.11823327 +0000 UTC m=+0.077223170 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:44:13 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:44:13 localhost python3.9[276905]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:44:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28211 DF PROTO=TCP SPT=51862 DPT=9102 SEQ=410963085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEFE94F0000000001030307) Oct 5 05:44:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28212 DF PROTO=TCP SPT=51862 DPT=9102 SEQ=410963085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEFED5D0000000001030307) Oct 5 05:44:15 localhost nova_compute[238314]: 2025-10-05 09:44:15.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:15 localhost nova_compute[238314]: 2025-10-05 09:44:15.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:15 localhost nova_compute[238314]: 2025-10-05 09:44:15.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:44:15 localhost nova_compute[238314]: 2025-10-05 09:44:15.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:15 localhost 
nova_compute[238314]: 2025-10-05 09:44:15.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:15 localhost nova_compute[238314]: 2025-10-05 09:44:15.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28213 DF PROTO=TCP SPT=51862 DPT=9102 SEQ=410963085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEEFF55D0000000001030307) Oct 5 05:44:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:44:16 localhost podman[276989]: 2025-10-05 09:44:16.678441124 +0000 UTC m=+0.083471737 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:44:16 localhost podman[276989]: 2025-10-05 09:44:16.689754223 +0000 UTC m=+0.094784876 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:44:16 localhost systemd[1]: 
fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:44:17 localhost python3[277108]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:44:17 localhost python3[277108]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "777353c8928aa59ae2473c1d38acf1eefa9a0dfeca7b821fed936f9ff9383648",#012 "Digest": "sha256:3ec0a9b9c48d1a633c4ec38a126dcd9e46ea9b27d706d3382d04e2097a666bce",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid@sha256:3ec0a9b9c48d1a633c4ec38a126dcd9e46ea9b27d706d3382d04e2097a666bce"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:14:31.883735142Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 403870347,#012 "VirtualSize": 403870347,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:5517f28613540e56901977cf7926b9c77e610f33e0d02e83afbce9137bbc7d2a"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" 
org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main 
keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:05.877369315Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which Oct 5 05:44:18 localhost python3.9[277281]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:19 localhost python3.9[277393]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:19 localhost python3.9[277448]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:44:20.441 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:44:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:44:20.442 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 
5 05:44:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:44:20.443 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:44:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28214 DF PROTO=TCP SPT=51862 DPT=9102 SEQ=410963085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF0051D0000000001030307) Oct 5 05:44:20 localhost nova_compute[238314]: 2025-10-05 09:44:20.769 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:20 localhost nova_compute[238314]: 2025-10-05 09:44:20.797 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Triggering sync for uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Oct 5 05:44:20 localhost nova_compute[238314]: 2025-10-05 09:44:20.798 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:44:20 localhost nova_compute[238314]: 2025-10-05 09:44:20.799 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" acquired by 
"nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:44:20 localhost nova_compute[238314]: 2025-10-05 09:44:20.843 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:44:20 localhost python3.9[277557]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657460.0383832-986-266935795078035/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:20 localhost nova_compute[238314]: 2025-10-05 09:44:20.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:20 localhost nova_compute[238314]: 2025-10-05 09:44:20.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:20 localhost nova_compute[238314]: 2025-10-05 09:44:20.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:44:20 localhost nova_compute[238314]: 2025-10-05 09:44:20.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:21 localhost 
nova_compute[238314]: 2025-10-05 09:44:21.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:21 localhost nova_compute[238314]: 2025-10-05 09:44:21.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:44:21 localhost podman[277613]: 2025-10-05 09:44:21.290570934 +0000 UTC m=+0.082721075 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 05:44:21 localhost podman[277613]: 2025-10-05 09:44:21.305744042 +0000 UTC m=+0.097894153 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible) Oct 5 05:44:21 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:44:21 localhost podman[248506]: time="2025-10-05T09:44:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:44:21 localhost podman[248506]: @ - - [05/Oct/2025:09:44:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138319 "" "Go-http-client/1.1" Oct 5 05:44:21 localhost python3.9[277612]: ansible-systemd Invoked with state=started name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:44:21 localhost podman[248506]: @ - - [05/Oct/2025:09:44:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17336 "" "Go-http-client/1.1" Oct 5 05:44:22 localhost openstack_network_exporter[250601]: ERROR 09:44:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:44:22 localhost openstack_network_exporter[250601]: ERROR 09:44:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:44:22 localhost openstack_network_exporter[250601]: ERROR 09:44:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:44:22 localhost openstack_network_exporter[250601]: ERROR 09:44:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:44:22 localhost openstack_network_exporter[250601]: Oct 5 05:44:22 localhost openstack_network_exporter[250601]: ERROR 09:44:22 
appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:44:22 localhost openstack_network_exporter[250601]: Oct 5 05:44:22 localhost python3.9[277741]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:23 localhost python3.9[277853]: ansible-ansible.builtin.systemd Invoked with name=edpm_iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:44:23 localhost systemd[1]: Stopping iscsid container... Oct 5 05:44:23 localhost iscsid[217591]: iscsid shutting down. Oct 5 05:44:23 localhost systemd[1]: libpod-90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.scope: Deactivated successfully. Oct 5 05:44:23 localhost podman[277857]: 2025-10-05 09:44:23.177898927 +0000 UTC m=+0.082568292 container died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, tcib_managed=true) Oct 5 05:44:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.timer: Deactivated successfully. Oct 5 05:44:23 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:44:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Failed to open /run/systemd/transient/90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: No such file or directory Oct 5 05:44:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:44:23 localhost podman[277857]: 2025-10-05 09:44:23.276690425 +0000 UTC m=+0.181359780 container cleanup 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 05:44:23 localhost podman[277857]: iscsid Oct 5 05:44:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.timer: Failed to open /run/systemd/transient/90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.timer: No such file 
or directory Oct 5 05:44:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Failed to open /run/systemd/transient/90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: No such file or directory Oct 5 05:44:23 localhost podman[277885]: 2025-10-05 09:44:23.378267021 +0000 UTC m=+0.067497665 container cleanup 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid) Oct 5 05:44:23 
localhost podman[277885]: iscsid Oct 5 05:44:23 localhost systemd[1]: edpm_iscsid.service: Deactivated successfully. Oct 5 05:44:23 localhost systemd[1]: Stopped iscsid container. Oct 5 05:44:23 localhost systemd[1]: Starting iscsid container... Oct 5 05:44:23 localhost systemd[1]: Started libcrun container. Oct 5 05:44:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb490b85a5e30771de8f3f649ab3a4d7acfa68677b7ed446bed134ffb28c8fe/merged/etc/target supports timestamps until 2038 (0x7fffffff) Oct 5 05:44:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb490b85a5e30771de8f3f649ab3a4d7acfa68677b7ed446bed134ffb28c8fe/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:44:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb490b85a5e30771de8f3f649ab3a4d7acfa68677b7ed446bed134ffb28c8fe/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:44:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.timer: Failed to open /run/systemd/transient/90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.timer: No such file or directory Oct 5 05:44:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Failed to open /run/systemd/transient/90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: No such file or directory Oct 5 05:44:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:44:23 localhost podman[277898]: 2025-10-05 09:44:23.551637154 +0000 UTC m=+0.142493652 container init 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:44:23 localhost iscsid[277913]: + sudo -E kolla_set_configs Oct 5 05:44:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:44:23 localhost podman[277898]: 2025-10-05 09:44:23.595824301 +0000 UTC m=+0.186680759 container start 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:44:23 localhost podman[277898]: iscsid Oct 5 05:44:23 localhost systemd[1]: Started iscsid container. Oct 5 05:44:23 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 05:44:23 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Oct 5 05:44:23 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 05:44:23 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 05:44:23 localhost podman[277921]: 2025-10-05 09:44:23.712913916 +0000 UTC m=+0.111457607 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:44:23 localhost podman[277921]: 2025-10-05 09:44:23.726334535 +0000 UTC m=+0.124878186 
container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:44:23 localhost podman[277921]: unhealthy Oct 5 05:44:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:44:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Failed with result 'exit-code'. 
Oct 5 05:44:23 localhost systemd[277933]: Queued start job for default target Main User Target. Oct 5 05:44:23 localhost systemd[277933]: Created slice User Application Slice. Oct 5 05:44:23 localhost systemd[277933]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 5 05:44:23 localhost systemd[277933]: Started Daily Cleanup of User's Temporary Directories. Oct 5 05:44:23 localhost systemd[277933]: Reached target Paths. Oct 5 05:44:23 localhost systemd[277933]: Reached target Timers. Oct 5 05:44:23 localhost systemd[277933]: Starting D-Bus User Message Bus Socket... Oct 5 05:44:23 localhost systemd[277933]: Starting Create User's Volatile Files and Directories... Oct 5 05:44:23 localhost systemd[277933]: Listening on D-Bus User Message Bus Socket. Oct 5 05:44:23 localhost systemd[277933]: Reached target Sockets. Oct 5 05:44:23 localhost systemd[277933]: Finished Create User's Volatile Files and Directories. Oct 5 05:44:23 localhost systemd[277933]: Reached target Basic System. Oct 5 05:44:23 localhost systemd[277933]: Reached target Main User Target. Oct 5 05:44:23 localhost systemd[277933]: Startup finished in 144ms. Oct 5 05:44:23 localhost systemd[1]: Started User Manager for UID 0. Oct 5 05:44:23 localhost systemd[1]: Started Session c16 of User root. Oct 5 05:44:23 localhost iscsid[277913]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:44:23 localhost iscsid[277913]: INFO:__main__:Validating config file Oct 5 05:44:23 localhost iscsid[277913]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:44:23 localhost iscsid[277913]: INFO:__main__:Writing out command to execute Oct 5 05:44:23 localhost systemd[1]: session-c16.scope: Deactivated successfully. 
Oct 5 05:44:23 localhost iscsid[277913]: ++ cat /run_command Oct 5 05:44:23 localhost iscsid[277913]: + CMD='/usr/sbin/iscsid -f' Oct 5 05:44:23 localhost iscsid[277913]: + ARGS= Oct 5 05:44:23 localhost iscsid[277913]: + sudo kolla_copy_cacerts Oct 5 05:44:23 localhost systemd[1]: Started Session c17 of User root. Oct 5 05:44:24 localhost systemd[1]: session-c17.scope: Deactivated successfully. Oct 5 05:44:24 localhost iscsid[277913]: + [[ ! -n '' ]] Oct 5 05:44:24 localhost iscsid[277913]: + . kolla_extend_start Oct 5 05:44:24 localhost iscsid[277913]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]] Oct 5 05:44:24 localhost iscsid[277913]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\''' Oct 5 05:44:24 localhost iscsid[277913]: Running command: '/usr/sbin/iscsid -f' Oct 5 05:44:24 localhost iscsid[277913]: + umask 0022 Oct 5 05:44:24 localhost iscsid[277913]: + exec /usr/sbin/iscsid -f Oct 5 05:44:25 localhost python3.9[278071]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:26 localhost nova_compute[238314]: 2025-10-05 09:44:26.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:26 localhost nova_compute[238314]: 2025-10-05 09:44:26.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:26 localhost nova_compute[238314]: 2025-10-05 09:44:26.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:44:26 localhost nova_compute[238314]: 2025-10-05 09:44:26.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:26 localhost nova_compute[238314]: 2025-10-05 09:44:26.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:26 localhost nova_compute[238314]: 2025-10-05 09:44:26.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:26 localhost python3.9[278181]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:44:26 localhost network[278198]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:44:26 localhost network[278199]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:44:26 localhost network[278200]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:44:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:44:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:44:29 localhost podman[278292]: 2025-10-05 09:44:29.392243436 +0000 UTC m=+0.093706195 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:44:29 localhost podman[278292]: 2025-10-05 09:44:29.425926776 +0000 UTC 
m=+0.127389525 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:44:29 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:44:31 localhost nova_compute[238314]: 2025-10-05 09:44:31.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:31 localhost nova_compute[238314]: 2025-10-05 09:44:31.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:31 localhost nova_compute[238314]: 2025-10-05 09:44:31.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5041 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:44:31 localhost nova_compute[238314]: 2025-10-05 09:44:31.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:31 localhost nova_compute[238314]: 2025-10-05 09:44:31.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:31 localhost nova_compute[238314]: 2025-10-05 09:44:31.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:31 localhost python3.9[278452]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:44:32 localhost python3.9[278562]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Oct 5 05:44:32 localhost systemd-journald[47722]: Field hash table of 
/run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Oct 5 05:44:32 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:44:32 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:44:32 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:44:32 localhost python3.9[278673]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:44:33 localhost podman[278731]: 2025-10-05 09:44:33.1016355 +0000 UTC m=+0.081138241 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:44:33 localhost podman[278731]: 2025-10-05 09:44:33.135768583 +0000 UTC m=+0.115271324 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:44:33 localhost systemd[1]: 
1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:44:33 localhost python3.9[278730]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:33 localhost python3.9[278866]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:34 localhost systemd[1]: Stopping User Manager for UID 0... Oct 5 05:44:34 localhost systemd[277933]: Activating special unit Exit the Session... Oct 5 05:44:34 localhost systemd[277933]: Stopped target Main User Target. Oct 5 05:44:34 localhost systemd[277933]: Stopped target Basic System. Oct 5 05:44:34 localhost systemd[277933]: Stopped target Paths. Oct 5 05:44:34 localhost systemd[277933]: Stopped target Sockets. Oct 5 05:44:34 localhost systemd[277933]: Stopped target Timers. Oct 5 05:44:34 localhost systemd[277933]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 05:44:34 localhost systemd[277933]: Closed D-Bus User Message Bus Socket. Oct 5 05:44:34 localhost systemd[277933]: Stopped Create User's Volatile Files and Directories. Oct 5 05:44:34 localhost systemd[277933]: Removed slice User Application Slice. Oct 5 05:44:34 localhost systemd[277933]: Reached target Shutdown. 
Oct 5 05:44:34 localhost systemd[277933]: Finished Exit the Session. Oct 5 05:44:34 localhost systemd[277933]: Reached target Exit the Session. Oct 5 05:44:34 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 05:44:34 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 05:44:34 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 05:44:34 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 5 05:44:34 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 05:44:34 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 05:44:34 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 05:44:34 localhost python3.9[278977]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:35 localhost python3.9[279087]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:36 localhost nova_compute[238314]: 2025-10-05 09:44:36.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:36 localhost nova_compute[238314]: 2025-10-05 09:44:36.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:36 localhost nova_compute[238314]: 2025-10-05 09:44:36.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 
05:44:36 localhost nova_compute[238314]: 2025-10-05 09:44:36.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:36 localhost nova_compute[238314]: 2025-10-05 09:44:36.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:36 localhost nova_compute[238314]: 2025-10-05 09:44:36.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:36 localhost python3.9[279199]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:44:37 localhost podman[279257]: 2025-10-05 09:44:37.681635724 +0000 UTC m=+0.085750652 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:44:37 localhost podman[279257]: 2025-10-05 09:44:37.696450862 +0000 UTC m=+0.100565790 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:44:37 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:44:38 localhost python3.9[279330]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.832 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 
'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.833 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.853 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.854 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c3fa8125-3aea-40c7-afdd-e0d5ace92bf9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:44:38.833688', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e90a231c-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.057612208, 'message_signature': 'fabfaa435e52a67ccb057f3513c864f236b631fd6031df7b6f10c9f15284ce6d'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:44:38.833688', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e90a3c12-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.057612208, 'message_signature': 'aaef4b1503a0b3eaa8a7cb966ffee3dd5d9c34767152d794d14de34775c076c1'}]}, 'timestamp': '2025-10-05 09:44:38.855193', '_unique_id': '42f6b74eee1a44a69e8ee9fa0b539adc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.856 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.858 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.885 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.886 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': '23fa8efc-06fe-45f2-af37-d12e0af7f4ea', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:44:38.858353', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e90efb8a-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': '771a4d167c8bfa02613b1840399cb6ee3c9aad6f8a45ee146f7a53b49283446d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:44:38.858353', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e90f0eb8-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': 'bbf6ade64230433a748f90fd153e1564eaa0a4ec6d4d9026d3d003a398aab544'}]}, 'timestamp': '2025-10-05 09:44:38.886779', '_unique_id': 'd7f032b3a16642ae862b7fd44fbb6f02'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:44:38.887 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.887 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.889 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.889 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.889 12 DEBUG 
ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fa89d695-3b11-4a55-a488-fa052c0fa766', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:44:38.889232', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e90f8104-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': 'bae97810afecdc9f69edf206d2efbadd4ccee13639ad7fbfc8047540869a660f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 
'counter_volume': 4300800, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:44:38.889232', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e90f9194-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': '68e5002416994753a6e95a5a326f4638e3350b9b06a686b04063ce162166fdba'}]}, 'timestamp': '2025-10-05 09:44:38.890126', '_unique_id': 'c92132b58caf41b99deb0675fb9582c2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.891 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.892 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.892 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 73912320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.892 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '90dabe13-2c79-4d38-b95a-5873fe7f70df', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73912320, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:44:38.892336', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e90ffbd4-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': 'db2499eee1b7c64a3a414460b17d7e6e844f7f00c3690ed25d1fe691bf109bf4'}, {'source': 'openstack', 'counter_name': 
'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:44:38.892336', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e9100e6c-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': '2171c1fe3c061058715d76117d3e5cd0c67a448a4341516424d9f2a6f72a00e4'}]}, 'timestamp': '2025-10-05 09:44:38.893327', '_unique_id': '80e37f9fbeed4c6aafa4c05eab447995'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 
423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.894 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.895 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.899 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '258eb864-dbd2-4ad0-9de3-adb38cadb36d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.895576', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e9112be4-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': '4e6cfbcd766844c9e37368053fac9538f1021fae732633cd2db538094f36da71'}]}, 'timestamp': '2025-10-05 09:44:38.900767', '_unique_id': '07ca8786a513456792e8e3ce6d0ab5ae'}: kombu.exceptions.OperationalError: 
[Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR 
oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause 
of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.903 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.903 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 9228 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b354c73b-233b-4322-a12a-d2c0f97d9b54', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9228, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.903795', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 
'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e911bb5e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': '734f5f3819653a605031b522867508dfc59e60d74b590087f5664b664c874369'}]}, 'timestamp': '2025-10-05 09:44:38.904341', '_unique_id': '1c0b9537bb3640ccafaa252b7d9694df'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, 
in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.906 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.906 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '23580b94-c0af-4b9a-a944-eda26dd8ee88', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.906581', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e912253a-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': '2a3e8bc65237e32e2130da2654b63013cf4a98c937c51df137c58f33795a765f'}]}, 'timestamp': '2025-10-05 09:44:38.907043', '_unique_id': 'b4851761461448fb9707e20ebf851247'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.907 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.909 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.909 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.909 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1213559769 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.909 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 162365672 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cc93ec94-93fb-420d-b253-9c5af6c09860', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1213559769, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:44:38.909367', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e91293da-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': 'a5648ee94393768748888c09785b44ce6a8f023e88aa5d236e050494e974962f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 162365672, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:44:38.909367', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e912a42e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': '864553472837d5a6265567402614fb901cf3bd6331003b17fbd5815726ea7a12'}]}, 'timestamp': '2025-10-05 09:44:38.910265', '_unique_id': '881d8dc6d1f14fc0b94855d3edcf2343'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.911 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.912 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.934 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 52.31640625 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f6561fc3-c0c2-4542-89b1-e9a14eafd185', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.31640625, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:44:38.912913', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'e9167932-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.158612249, 'message_signature': '072899fac922e0ea9039102f43586ba6a6dc8b25764bd33fe27bd8043677624f'}]}, 'timestamp': '2025-10-05 09:44:38.935462', '_unique_id': 'ff062d3de0a04d53a9fe2ecaa38c5dd1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.937 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:44:38.937 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 87 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0b2c2233-c490-4cdc-83f5-008ebe97ba88', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 87, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.937930', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e916ee30-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': 
'9f449796b9a73af0da496a8eed1642caa43a6289208cec33271f8012cceb2b42'}]}, 'timestamp': '2025-10-05 09:44:38.938430', '_unique_id': '4ffdcb80f1ff41aa913adb9b244bbca3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR 
oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.939 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.940 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.940 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.940 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '459eebcc-a13c-478f-baa4-bbebe63201e9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:44:38.940732', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e9175b72-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.057612208, 'message_signature': 'a0bddcb332d506a87b4b1dd8e52eccb3f37bf0ac22b72a69bdbdc7e36916ea7f'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:44:38.940732', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e91770ee-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.057612208, 'message_signature': '7949bb72fe6201ff9f359c7bcd069b240cc38e2b7db7b8e285c31368a9384b13'}]}, 'timestamp': '2025-10-05 09:44:38.941731', '_unique_id': 'f3d15abab0284ed1aecfc52108e1bb63'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.942 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.943 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.944 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '27822ecf-2bcb-4482-9c35-0f329cdbeb95', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.944178', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e917e4c0-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': '62d2c0a2fbc659911ccb1ed33506b1d8d9c1e8c89b1db5a5a4ad131c911cfabd'}]}, 'timestamp': '2025-10-05 09:44:38.944776', '_unique_id': '84fdf7b7c9b74fce940cab7c2b3512e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.946 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.947 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 274779002 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.947 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 31348051 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b2faae64-e53a-4e18-b36f-be9cec4839f1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 274779002, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:44:38.947048', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e9185216-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': '7bf90c5b5df61c8d497f6be8293422bb09d31d64e495a657becac979da0b8d3d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 31348051, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:44:38.947048', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e9186486-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': '4d132985ff267fefc3e8e7c681a3838550449ed2881459262e0c4f115e7e4fc6'}]}, 'timestamp': '2025-10-05 09:44:38.947963', '_unique_id': '8ea12ef75e3344418285d61391643991'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:44:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.950 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.950 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 58830000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f9e02892-a16d-41e1-bd69-0f98bf7787a9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 58830000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:44:38.950209', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'cpu_number': 1}, 'message_id': 'e918cf70-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.158612249, 'message_signature': '18e2187f32d15718857178f38c526c8d13f08de2b6ee40c6e093252029551d59'}]}, 'timestamp': '2025-10-05 09:44:38.950730', '_unique_id': '98e326c4324e4e3b9f9fcd4541f599e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() 
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.951 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.952 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.953 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '67e4c8c9-bc29-4e0f-b60a-bef37156fd63', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.953114', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e9193fc8-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': '86e41a38d06f46cac32a8ca87d0f81acb0f6ae718adaf8ccab918c64130560b5'}]}, 'timestamp': '2025-10-05 09:44:38.953658', '_unique_id': '50b7bff3652046d4bedfb6e4a9562fdb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.954 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 09:44:38.955 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.956 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3ddd422f-8519-40da-b631-40838d882019', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.955997', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e919b1f6-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': 'a193a269702c14b04c8a9cb434137c624518f58b15b3dca52e9be78ff9a475a7'}]}, 'timestamp': '2025-10-05 09:44:38.956586', '_unique_id': 'a2d39a822b0045869bcf0145f8252bb9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.957 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.958 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.958 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 446 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'dca8b67d-d687-4149-8775-8feb38877f14', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 446, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.958832', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e91a1e3e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': '9473631b011d722bbfd934ead015668e74fc59406b3847440180046a09ac1d6f'}]}, 'timestamp': '2025-10-05 09:44:38.959293', '_unique_id': '4421b95bb7ce4f159d3038f5fae77b34'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:44:38 localhost
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.960 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.961 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.961 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 574 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.962 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '054b3cb3-8ca4-44dc-8177-178f4797d9bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 574, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:44:38.961575', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e91a890a-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': '510aa05b3093986d293c666d34528a634a728a12042094057e263574d2ca1164'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:44:38.961575', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e91a992c-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.082303065, 'message_signature': '71dcffad44e9da4a30a0f44d0748411e9add3143ab8d477409beb479489b7cb9'}]}, 'timestamp': '2025-10-05 09:44:38.962442', '_unique_id': 'bf543ca29d59440192261e4cbae64ea2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.963 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.964 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.965 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.965 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4eed8bff-82b4-49d1-af5b-59f2b6af01f1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:44:38.964959', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e91b0d6c-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.057612208, 'message_signature': '84bff903a4c9bfb48fa5ac0196631bfd0149e270d9071f25de6ffa3aa746abf6'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:44:38.964959', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e91b1f28-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.057612208, 'message_signature': 'e013a6212bd31e9e3ed16f2506cc7e60fc4af041c7874c924199716db3240bd3'}]}, 'timestamp': '2025-10-05 09:44:38.965839', '_unique_id': 'd0011dcdae2a43808a11d495588afccb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.966 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.967 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.968 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '323687fe-72d5-4a02-a224-54469a6a50c7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.968070', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e91b872e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': '14f62fa52a0d0ae4a9e58767b1db3a0d7a140b8b5bae616d14ad2551d5d4066d'}]}, 'timestamp': '2025-10-05 09:44:38.968565', '_unique_id': 'd4319d469a3c43ce8c573c4762aaee2c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:44:38.969 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.969 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.970 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.970 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '43d23021-a2c1-4365-989e-eba05380c8b7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:44:38.970329', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'e91bdd1e-a1cf-11f0-9396-fa163ec6f33d', 'monotonic_time': 10958.119490514, 'message_signature': '302fe4d7fcf91ad2c01d3570c85b562670a12cd123b76b2bdfeb4e838fe5b543'}]}, 'timestamp': '2025-10-05 09:44:38.970645', '_unique_id': 'c519110218ee404090833ee8d8ca00a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:44:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:44:38.971 12 ERROR oslo_messaging.notify.messaging Oct 5 05:44:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:44:38.971 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:39 localhost python3.9[279441]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:40 localhost python3.9[279551]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:40 localhost nova_compute[238314]: 2025-10-05 09:44:40.407 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:40 localhost python3.9[279661]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:41 localhost nova_compute[238314]: 2025-10-05 09:44:41.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:41 localhost 
nova_compute[238314]: 2025-10-05 09:44:41.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:41 localhost nova_compute[238314]: 2025-10-05 09:44:41.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:44:41 localhost nova_compute[238314]: 2025-10-05 09:44:41.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:41 localhost nova_compute[238314]: 2025-10-05 09:44:41.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:41 localhost nova_compute[238314]: 2025-10-05 09:44:41.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:41 localhost nova_compute[238314]: 2025-10-05 09:44:41.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:41 localhost nova_compute[238314]: 2025-10-05 09:44:41.378 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:41 localhost nova_compute[238314]: 2025-10-05 09:44:41.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:44:41 localhost python3.9[279771]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:44:42 localhost podman[279881]: 2025-10-05 09:44:42.142593289 +0000 UTC m=+0.086560464 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.) Oct 5 05:44:42 localhost podman[279881]: 2025-10-05 09:44:42.158814476 +0000 UTC m=+0.102781701 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, maintainer=Red Hat, Inc.) Oct 5 05:44:42 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:44:42 localhost python3.9[279882]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:42 localhost nova_compute[238314]: 2025-10-05 09:44:42.373 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:42 localhost nova_compute[238314]: 2025-10-05 09:44:42.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:42 localhost python3.9[280011]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:43 localhost nova_compute[238314]: 2025-10-05 09:44:43.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:43 localhost nova_compute[238314]: 2025-10-05 09:44:43.378 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27362 DF PROTO=TCP SPT=59578 DPT=9102 SEQ=3746274821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF05E7D0000000001030307) Oct 5 05:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:44:43 localhost podman[280063]: 2025-10-05 09:44:43.675716405 +0000 UTC m=+0.083484947 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:44:43 localhost podman[280063]: 2025-10-05 09:44:43.712254616 +0000 UTC m=+0.120023078 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 
'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:44:43 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:44:44 localhost python3.9[280146]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27363 DF PROTO=TCP SPT=59578 DPT=9102 SEQ=3746274821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF0629D0000000001030307) Oct 5 05:44:44 localhost python3.9[280256]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:45 localhost python3.9[280313]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:45 localhost nova_compute[238314]: 2025-10-05 09:44:45.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:45 localhost nova_compute[238314]: 2025-10-05 09:44:45.398 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:44:45 localhost nova_compute[238314]: 2025-10-05 09:44:45.399 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:44:45 localhost nova_compute[238314]: 2025-10-05 09:44:45.399 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:44:45 localhost nova_compute[238314]: 2025-10-05 09:44:45.400 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:44:45 localhost nova_compute[238314]: 2025-10-05 09:44:45.401 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:44:45 localhost nova_compute[238314]: 2025-10-05 09:44:45.859 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:44:45 localhost python3.9[280443]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:45 localhost nova_compute[238314]: 2025-10-05 09:44:45.937 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:44:45 localhost nova_compute[238314]: 2025-10-05 09:44:45.937 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.090 2 WARNING nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.092 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12055MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.092 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.092 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 
05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.175 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.175 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.175 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.225 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 
05:44:46 localhost python3.9[280502]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27364 DF PROTO=TCP SPT=59578 DPT=9102 SEQ=3746274821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF06A9D0000000001030307) Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.715 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.723 2 DEBUG nova.compute.provider_tree [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.744 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 
512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.746 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:44:46 localhost nova_compute[238314]: 2025-10-05 09:44:46.747 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:44:47 localhost podman[280634]: 2025-10-05 09:44:47.004000534 +0000 UTC m=+0.092629335 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:44:47 localhost podman[280634]: 2025-10-05 09:44:47.014755147 +0000 UTC m=+0.103383928 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:44:47 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:44:47 localhost python3.9[280640]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:47 localhost python3.9[280767]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:48 localhost python3.9[280824]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:48 localhost nova_compute[238314]: 2025-10-05 09:44:48.747 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:48 localhost nova_compute[238314]: 2025-10-05 09:44:48.748 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:44:48 localhost nova_compute[238314]: 2025-10-05 09:44:48.748 2 DEBUG nova.compute.manager [None 
req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:44:49 localhost nova_compute[238314]: 2025-10-05 09:44:49.536 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:44:49 localhost nova_compute[238314]: 2025-10-05 09:44:49.536 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:44:49 localhost nova_compute[238314]: 2025-10-05 09:44:49.536 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:44:49 localhost nova_compute[238314]: 2025-10-05 09:44:49.537 2 DEBUG nova.objects.instance [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:44:49 localhost python3.9[280934]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:49 localhost nova_compute[238314]: 2025-10-05 09:44:49.940 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with 
network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:44:49 localhost nova_compute[238314]: 2025-10-05 09:44:49.959 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:44:49 localhost nova_compute[238314]: 2025-10-05 09:44:49.959 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:44:50 localhost python3.9[280991]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root 
dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27365 DF PROTO=TCP SPT=59578 DPT=9102 SEQ=3746274821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF07A5D0000000001030307) Oct 5 05:44:51 localhost nova_compute[238314]: 2025-10-05 09:44:51.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:51 localhost nova_compute[238314]: 2025-10-05 09:44:51.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:51 localhost nova_compute[238314]: 2025-10-05 09:44:51.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5038 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:44:51 localhost nova_compute[238314]: 2025-10-05 09:44:51.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:51 localhost nova_compute[238314]: 2025-10-05 09:44:51.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:51 localhost nova_compute[238314]: 2025-10-05 09:44:51.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:51 localhost podman[248506]: time="2025-10-05T09:44:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:44:51 localhost podman[248506]: @ - - [05/Oct/2025:09:44:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138318 "" "Go-http-client/1.1" Oct 5 05:44:51 localhost podman[248506]: @ - - [05/Oct/2025:09:44:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17340 "" "Go-http-client/1.1" Oct 5 05:44:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:44:51 localhost podman[281102]: 2025-10-05 09:44:51.651668697 +0000 UTC m=+0.083424685 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:44:51 localhost podman[281102]: 2025-10-05 09:44:51.694844005 +0000 UTC m=+0.126600033 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd) Oct 5 05:44:51 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:44:51 localhost python3.9[281101]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:44:51 localhost systemd[1]: Reloading. Oct 5 05:44:52 localhost systemd-sysv-generator[281145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:44:52 localhost systemd-rc-local-generator[281142]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:44:52 localhost openstack_network_exporter[250601]: ERROR 09:44:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:44:52 localhost openstack_network_exporter[250601]: ERROR 09:44:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:44:52 localhost openstack_network_exporter[250601]: ERROR 09:44:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:44:52 localhost openstack_network_exporter[250601]: ERROR 09:44:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:44:52 localhost openstack_network_exporter[250601]: Oct 5 05:44:52 localhost openstack_network_exporter[250601]: ERROR 09:44:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:44:52 localhost openstack_network_exporter[250601]: Oct 5 05:44:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:44:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:44:54 localhost systemd[1]: tmp-crun.9K9cg6.mount: Deactivated successfully. 
Oct 5 05:44:54 localhost podman[281266]: 2025-10-05 09:44:54.188634424 +0000 UTC m=+0.090937348 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 05:44:54 localhost podman[281266]: 2025-10-05 09:44:54.201871937 +0000 UTC m=+0.104174861 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid) Oct 5 05:44:54 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:44:54 localhost python3.9[281267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:54 localhost python3.9[281342]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:55 localhost python3.9[281452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:55 localhost python3.9[281509]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:56 localhost nova_compute[238314]: 2025-10-05 09:44:56.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:56 localhost nova_compute[238314]: 2025-10-05 09:44:56.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:44:56 
localhost nova_compute[238314]: 2025-10-05 09:44:56.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:44:56 localhost nova_compute[238314]: 2025-10-05 09:44:56.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:56 localhost nova_compute[238314]: 2025-10-05 09:44:56.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:44:56 localhost nova_compute[238314]: 2025-10-05 09:44:56.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:44:56 localhost python3.9[281619]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:44:56 localhost systemd[1]: Reloading. Oct 5 05:44:56 localhost systemd-rc-local-generator[281646]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:44:56 localhost systemd-sysv-generator[281650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:44:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:44:57 localhost systemd[1]: Starting Create netns directory... Oct 5 05:44:57 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Oct 5 05:44:57 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:44:57 localhost systemd[1]: Finished Create netns directory. Oct 5 05:44:58 localhost python3.9[281771]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:58 localhost python3.9[281881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:59 localhost python3.9[281938]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:44:59 localhost podman[281956]: 2025-10-05 09:44:59.683942229 +0000 UTC m=+0.083387475 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:44:59 localhost podman[281956]: 2025-10-05 09:44:59.714988965 +0000 UTC 
m=+0.114434181 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:44:59 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:45:00 localhost python3.9[282066]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:45:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 701 syncs, 7.43 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:45:01 localhost nova_compute[238314]: 2025-10-05 09:45:01.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:45:01 localhost nova_compute[238314]: 2025-10-05 09:45:01.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:45:01 localhost python3.9[282176]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:45:01 localhost python3.9[282233]: ansible-ansible.legacy.file Invoked with mode=0600 
dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.2nrr5zxb recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:02 localhost python3.9[282343]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:45:03 localhost systemd[1]: tmp-crun.LXUTii.mount: Deactivated successfully. 
Oct 5 05:45:03 localhost podman[282510]: 2025-10-05 09:45:03.687405602 +0000 UTC m=+0.092547802 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:45:03 localhost podman[282510]: 2025-10-05 09:45:03.785933613 +0000 UTC m=+0.191075783 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:45:03 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:45:04 localhost python3.9[282644]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Oct 5 05:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:45:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5475 writes, 24K keys, 5475 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5475 writes, 735 syncs, 7.45 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 20 writes, 40 keys, 20 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s#012Interval WAL: 20 writes, 10 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:45:05 localhost python3.9[282754]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:45:06 localhost nova_compute[238314]: 2025-10-05 09:45:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:45:06 localhost nova_compute[238314]: 2025-10-05 09:45:06.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:45:06 localhost python3.9[282864]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:45:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:45:08 localhost podman[282908]: 2025-10-05 09:45:08.673569187 +0000 UTC m=+0.085504704 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 05:45:08 localhost podman[282908]: 2025-10-05 09:45:08.684150296 +0000 UTC m=+0.096085813 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:45:08 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:45:10 localhost python3[283019]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:45:11 localhost python3[283019]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "cfea91e4d3d24ea2b93aee805b1650aeb46d9546bcbf0bc2c512e1c027bd6148",#012 "Digest": "sha256:2e6b33858f10c5161efa5026fe197bed1871f616a88492deb2d9589afe55f306",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:2e6b33858f10c5161efa5026fe197bed1871f616a88492deb2d9589afe55f306"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:10:07.809982095Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 249392108,#012 "VirtualSize": 249392108,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/7b5d9698f5e241817bc1ab20fc93517a066d97944c963cb3e8954ea8e4465d09/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/7b5d9698f5e241817bc1ab20fc93517a066d97944c963cb3e8954ea8e4465d09/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:fd97c8266967784f89c19eba886aaa5428c66f61d661ea8625c291c6fd888856"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD 
[\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main 
skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:05.877369315Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:06.203051718Z",#012 Oct 5 05:45:11 localhost nova_compute[238314]: 2025-10-05 09:45:11.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:45:11 localhost python3.9[283193]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:45:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:45:12 localhost systemd[1]: tmp-crun.TA1ttm.mount: Deactivated successfully. Oct 5 05:45:12 localhost podman[283285]: 2025-10-05 09:45:12.700648087 +0000 UTC m=+0.095540377 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7) Oct 5 05:45:12 localhost podman[283285]: 2025-10-05 09:45:12.716810903 +0000 UTC m=+0.111703193 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Oct 5 05:45:12 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:45:12 localhost python3.9[283317]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:13 localhost python3.9[283381]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:45:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53127 DF PROTO=TCP SPT=48892 DPT=9102 SEQ=3431141237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF0D3AD0000000001030307) Oct 5 05:45:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:45:13 localhost podman[283491]: 2025-10-05 09:45:13.903274867 +0000 UTC m=+0.077448487 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:45:13 localhost podman[283491]: 2025-10-05 09:45:13.914661518 +0000 UTC m=+0.088835138 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:45:13 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:45:14 localhost python3.9[283490]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657513.36115-2189-97636489236484/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53128 DF PROTO=TCP SPT=48892 DPT=9102 SEQ=3431141237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF0D79E0000000001030307) Oct 5 05:45:14 localhost python3.9[283568]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:15 localhost python3.9[283678]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:45:16 localhost python3.9[283788]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None 
serole=None selevel=None setype=None attributes=None Oct 5 05:45:16 localhost nova_compute[238314]: 2025-10-05 09:45:16.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:45:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53129 DF PROTO=TCP SPT=48892 DPT=9102 SEQ=3431141237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF0DF9E0000000001030307) Oct 5 05:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:45:17 localhost podman[283935]: 2025-10-05 09:45:17.26961133 +0000 UTC m=+0.086500732 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:45:17 localhost podman[283935]: 2025-10-05 09:45:17.299481282 +0000 UTC m=+0.116370634 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:45:17 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:45:17 localhost python3.9[283934]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:45:18 localhost python3.9[284134]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Oct 5 05:45:19 localhost python3.9[284262]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:45:20 localhost python3.9[284319]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:45:20.443 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:45:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:45:20.444 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m 
Oct 5 05:45:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:45:20.445 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:45:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53130 DF PROTO=TCP SPT=48892 DPT=9102 SEQ=3431141237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF0EF5D0000000001030307) Oct 5 05:45:21 localhost python3.9[284429]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:21 localhost nova_compute[238314]: 2025-10-05 09:45:21.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:45:21 localhost podman[248506]: time="2025-10-05T09:45:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:45:21 localhost podman[248506]: @ - - [05/Oct/2025:09:45:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138317 "" "Go-http-client/1.1" Oct 5 05:45:21 localhost podman[248506]: @ - - [05/Oct/2025:09:45:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17337 "" "Go-http-client/1.1" Oct 5 05:45:21 localhost python3.9[284557]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] 
gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:45:22 localhost openstack_network_exporter[250601]: ERROR 09:45:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:45:22 localhost openstack_network_exporter[250601]: ERROR 09:45:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:45:22 localhost openstack_network_exporter[250601]: ERROR 09:45:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:45:22 localhost openstack_network_exporter[250601]: ERROR 09:45:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:45:22 localhost openstack_network_exporter[250601]: Oct 5 05:45:22 localhost openstack_network_exporter[250601]: ERROR 09:45:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:45:22 localhost openstack_network_exporter[250601]: Oct 5 05:45:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:45:22 localhost podman[284582]: 2025-10-05 09:45:22.699330185 +0000 UTC m=+0.096445284 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 05:45:22 localhost podman[284582]: 2025-10-05 09:45:22.741792033 +0000 UTC m=+0.138907082 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd) Oct 5 05:45:22 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:45:22 localhost python3.9[284639]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:45:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:45:24 localhost podman[284642]: 2025-10-05 09:45:24.667751196 +0000 UTC m=+0.079872835 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:45:24 localhost podman[284642]: 2025-10-05 09:45:24.681171944 +0000 UTC m=+0.093293573 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 05:45:24 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:45:26 localhost nova_compute[238314]: 2025-10-05 09:45:26.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:45:26 localhost python3.9[284769]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:45:27 localhost python3.9[284883]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:29 localhost python3.9[284993]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:45:29 localhost systemd[1]: Reloading. Oct 5 05:45:29 localhost systemd-rc-local-generator[285022]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:45:29 localhost systemd-sysv-generator[285025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:45:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:45:29 localhost podman[285031]: 2025-10-05 09:45:29.858072484 +0000 UTC m=+0.082357234 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:45:29 localhost podman[285031]: 2025-10-05 09:45:29.864457198 +0000 UTC m=+0.088741978 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true) Oct 5 05:45:29 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:45:30 localhost python3.9[285156]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:45:30 localhost network[285173]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:45:30 localhost network[285174]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:45:30 localhost network[285175]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:45:31 localhost nova_compute[238314]: 2025-10-05 09:45:31.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:45:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:45:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:45:33 localhost podman[285278]: 2025-10-05 09:45:33.926961632 +0000 UTC m=+0.094778333 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 05:45:33 localhost systemd[1]: tmp-crun.MO4oea.mount: Deactivated successfully. 
Oct 5 05:45:34 localhost podman[285278]: 2025-10-05 09:45:34.041914783 +0000 UTC m=+0.209731484 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:45:34 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:45:35 localhost python3.9[285436]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:36 localhost python3.9[285547]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:36 localhost nova_compute[238314]: 2025-10-05 09:45:36.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:45:37 localhost python3.9[285658]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:37 localhost python3.9[285769]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:45:39 localhost systemd[1]: tmp-crun.bBQpIV.mount: Deactivated successfully. 
Oct 5 05:45:39 localhost podman[285881]: 2025-10-05 09:45:39.398198935 +0000 UTC m=+0.091722430 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:45:39 localhost podman[285881]: 2025-10-05 09:45:39.411779685 +0000 UTC m=+0.105303160 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true) Oct 5 05:45:39 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:45:39 localhost nova_compute[238314]: 2025-10-05 09:45:39.585 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:39 localhost python3.9[285880]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:41 localhost nova_compute[238314]: 2025-10-05 09:45:41.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:45:42 localhost python3.9[286010]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:42 localhost nova_compute[238314]: 2025-10-05 09:45:42.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:42 localhost nova_compute[238314]: 2025-10-05 09:45:42.378 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:42 localhost nova_compute[238314]: 2025-10-05 09:45:42.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:45:43 localhost nova_compute[238314]: 2025-10-05 09:45:43.378 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:43 localhost nova_compute[238314]: 2025-10-05 09:45:43.379 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59415 DF PROTO=TCP SPT=56552 DPT=9102 SEQ=280793316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF148DE0000000001030307) Oct 5 05:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:45:43 localhost podman[286045]: 2025-10-05 09:45:43.677214166 +0000 UTC m=+0.085175541 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Oct 5 05:45:43 localhost podman[286045]: 2025-10-05 09:45:43.693821688 +0000 UTC m=+0.101783073 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, version=9.6, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container) Oct 5 05:45:43 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:45:44 localhost python3.9[286141]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:45:44 localhost podman[286143]: 2025-10-05 09:45:44.275292156 +0000 UTC m=+0.080562665 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:45:44 localhost podman[286143]: 2025-10-05 09:45:44.287021286 +0000 UTC m=+0.092291835 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 5 05:45:44 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 05:45:44 localhost nova_compute[238314]: 2025-10-05 09:45:44.373 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:45:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59416 DF PROTO=TCP SPT=56552 DPT=9102 SEQ=280793316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF14CDD0000000001030307)
Oct 5 05:45:45 localhost nova_compute[238314]: 2025-10-05 09:45:45.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:45:45 localhost nova_compute[238314]: 2025-10-05 09:45:45.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:45:45 localhost python3.9[286276]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.415 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.415 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.416 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.416 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.417 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:45:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59417 DF PROTO=TCP SPT=56552 DPT=9102 SEQ=280793316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF154DD0000000001030307)
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.896 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.968 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 05:45:46 localhost nova_compute[238314]: 2025-10-05 09:45:46.968 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 05:45:46 localhost python3.9[286407]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.169 2 WARNING nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.171 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12070MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.171 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.172 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.283 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.284 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.285 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.317 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:45:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 05:45:47 localhost podman[286521]: 2025-10-05 09:45:47.525852364 +0000 UTC m=+0.107521299 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 05:45:47 localhost podman[286521]: 2025-10-05 09:45:47.557877147 +0000 UTC m=+0.139546052 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 05:45:47 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 05:45:47 localhost python3.9[286520]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.795 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.804 2 DEBUG nova.compute.provider_tree [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.820 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.823 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 5 05:45:47 localhost nova_compute[238314]: 2025-10-05 09:45:47.823 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:45:48 localhost python3.9[286672]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:48 localhost python3.9[286782]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:49 localhost python3.9[286892]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:49 localhost nova_compute[238314]: 2025-10-05 09:45:49.824 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:45:49 localhost nova_compute[238314]: 2025-10-05 09:45:49.825 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 05:45:49 localhost nova_compute[238314]: 2025-10-05 09:45:49.825 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 05:45:49 localhost nova_compute[238314]: 2025-10-05 09:45:49.878 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 5 05:45:49 localhost nova_compute[238314]: 2025-10-05 09:45:49.878 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 5 05:45:49 localhost nova_compute[238314]: 2025-10-05 09:45:49.879 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 5 05:45:49 localhost nova_compute[238314]: 2025-10-05 09:45:49.879 2 DEBUG nova.objects.instance [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 5 05:45:50 localhost python3.9[287002]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:50 localhost nova_compute[238314]: 2025-10-05 09:45:50.179 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 5 05:45:50 localhost nova_compute[238314]: 2025-10-05 09:45:50.200 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 5 05:45:50 localhost nova_compute[238314]: 2025-10-05 09:45:50.200 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 5 05:45:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59418 DF PROTO=TCP SPT=56552 DPT=9102 SEQ=280793316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF1649D0000000001030307)
Oct 5 05:45:50 localhost python3.9[287112]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:51 localhost python3.9[287222]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:51 localhost nova_compute[238314]: 2025-10-05 09:45:51.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:45:51 localhost podman[248506]: time="2025-10-05T09:45:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:45:51 localhost podman[248506]: @ - - [05/Oct/2025:09:45:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138317 "" "Go-http-client/1.1"
Oct 5 05:45:51 localhost podman[248506]: @ - - [05/Oct/2025:09:45:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17337 "" "Go-http-client/1.1"
Oct 5 05:45:52 localhost openstack_network_exporter[250601]: ERROR 09:45:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:45:52 localhost openstack_network_exporter[250601]: ERROR 09:45:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:45:52 localhost openstack_network_exporter[250601]: ERROR 09:45:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:45:52 localhost openstack_network_exporter[250601]: ERROR 09:45:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:45:52 localhost openstack_network_exporter[250601]:
Oct 5 05:45:52 localhost openstack_network_exporter[250601]: ERROR 09:45:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:45:52 localhost openstack_network_exporter[250601]:
Oct 5 05:45:52 localhost python3.9[287332]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:52 localhost python3.9[287442]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:45:53 localhost podman[287553]: 2025-10-05 09:45:53.254262734 +0000 UTC m=+0.076064683 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 5 05:45:53 localhost podman[287553]: 2025-10-05 09:45:53.265742996 +0000 UTC m=+0.087544935 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 5 05:45:53 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 05:45:53 localhost python3.9[287552]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:53 localhost python3.9[287679]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:54 localhost python3.9[287789]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:45:55 localhost podman[287900]: 2025-10-05 09:45:55.076278971 +0000 UTC m=+0.088059739 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 05:45:55 localhost podman[287900]: 2025-10-05 09:45:55.086438508 +0000 UTC m=+0.098219256 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2)
Oct 5 05:45:55 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:45:55 localhost python3.9[287899]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:55 localhost python3.9[288028]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:56 localhost nova_compute[238314]: 2025-10-05 09:45:56.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:45:56 localhost python3.9[288138]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:45:57 localhost python3.9[288248]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:45:58 localhost python3.9[288358]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 5 05:45:58 localhost python3.9[288468]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 5 05:45:58 localhost systemd[1]: Reloading.
Oct 5 05:45:59 localhost systemd-rc-local-generator[288489]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:45:59 localhost systemd-sysv-generator[288493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:45:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:46:00 localhost podman[288614]: 2025-10-05 09:46:00.082690935 +0000 UTC m=+0.087826483 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:46:00 localhost podman[288614]: 2025-10-05 09:46:00.09274686 +0000 UTC 
m=+0.097882368 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:46:00 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:46:00 localhost python3.9[288613]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:46:00 localhost python3.9[288742]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:46:01 localhost nova_compute[238314]: 2025-10-05 09:46:01.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:46:02 localhost python3.9[288853]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:46:02 localhost python3.9[288964]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:46:03 localhost python3.9[289075]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:46:04 localhost python3.9[289186]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:46:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:46:04 localhost podman[289298]: 2025-10-05 09:46:04.54649489 +0000 UTC m=+0.076654699 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 5 05:46:04 localhost podman[289298]: 2025-10-05 09:46:04.626483548 +0000 UTC m=+0.156643357 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:46:04 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:46:04 localhost python3.9[289297]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:46:05 localhost python3.9[289433]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:46:06 localhost nova_compute[238314]: 2025-10-05 09:46:06.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:46:08 localhost python3.9[289544]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:08 localhost python3.9[289654]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:09 localhost python3.9[289764]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner 
setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:46:09 localhost systemd[1]: tmp-crun.0Fp5Ev.mount: Deactivated successfully. Oct 5 05:46:09 localhost podman[289782]: 2025-10-05 09:46:09.677937511 +0000 UTC m=+0.088158223 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:46:09 localhost podman[289782]: 2025-10-05 09:46:09.718800274 +0000 UTC m=+0.129020936 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:46:09 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:46:10 localhost python3.9[289892]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:10 localhost python3.9[290002]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:11 localhost nova_compute[238314]: 2025-10-05 09:46:11.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:46:11 localhost python3.9[290112]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
attributes=None Oct 5 05:46:12 localhost python3.9[290222]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:12 localhost python3.9[290332]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27771 DF PROTO=TCP SPT=60452 DPT=9102 SEQ=3559406925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF1BE0E0000000001030307) Oct 5 05:46:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 05:46:13 localhost podman[290443]: 2025-10-05 09:46:13.921176636 +0000 UTC m=+0.076861584 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible) Oct 5 05:46:13 localhost podman[290443]: 2025-10-05 09:46:13.961884576 +0000 UTC m=+0.117569494 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 05:46:13 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:46:14 localhost python3.9[290442]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27772 DF PROTO=TCP SPT=60452 DPT=9102 SEQ=3559406925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF1C21D0000000001030307) Oct 5 05:46:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:46:14 localhost systemd[1]: tmp-crun.mRrDEg.mount: Deactivated successfully. 
Oct 5 05:46:14 localhost podman[290573]: 2025-10-05 09:46:14.58509841 +0000 UTC m=+0.089979202 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:46:14 localhost podman[290573]: 2025-10-05 09:46:14.596707506 +0000 UTC m=+0.101588308 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:46:14 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:46:14 localhost python3.9[290572]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:15 localhost python3.9[290705]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:16 localhost nova_compute[238314]: 2025-10-05 09:46:16.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:46:16 localhost nova_compute[238314]: 2025-10-05 09:46:16.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:46:16 localhost nova_compute[238314]: 2025-10-05 09:46:16.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:46:16 localhost nova_compute[238314]: 2025-10-05 09:46:16.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:46:16 localhost nova_compute[238314]: 2025-10-05 09:46:16.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:46:16 localhost nova_compute[238314]: 2025-10-05 09:46:16.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:46:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27773 DF PROTO=TCP SPT=60452 DPT=9102 SEQ=3559406925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF1CA1D0000000001030307) Oct 5 05:46:16 localhost python3.9[290815]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:46:18 localhost podman[290833]: 2025-10-05 09:46:18.670038864 +0000 UTC m=+0.077974955 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:46:18 localhost podman[290833]: 2025-10-05 09:46:18.678501555 +0000 UTC m=+0.086437636 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:46:18 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:46:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:46:20.444 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:46:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:46:20.445 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:46:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:46:20.446 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:46:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27774 DF PROTO=TCP SPT=60452 DPT=9102 SEQ=3559406925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF1D9DD0000000001030307) Oct 5 05:46:21 localhost nova_compute[238314]: 2025-10-05 09:46:21.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:46:21 localhost podman[248506]: time="2025-10-05T09:46:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:46:21 localhost podman[248506]: @ - - [05/Oct/2025:09:46:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138317 "" "Go-http-client/1.1" Oct 5 05:46:21 localhost podman[248506]: @ - - [05/Oct/2025:09:46:21 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17347 "" "Go-http-client/1.1" Oct 5 05:46:22 localhost openstack_network_exporter[250601]: ERROR 09:46:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:46:22 localhost openstack_network_exporter[250601]: ERROR 09:46:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:46:22 localhost openstack_network_exporter[250601]: ERROR 09:46:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:46:22 localhost openstack_network_exporter[250601]: ERROR 09:46:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:46:22 localhost openstack_network_exporter[250601]: Oct 5 05:46:22 localhost openstack_network_exporter[250601]: ERROR 09:46:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:46:22 localhost openstack_network_exporter[250601]: Oct 5 05:46:22 localhost python3.9[291003]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Oct 5 05:46:23 localhost podman[291113]: Oct 5 05:46:23 localhost podman[291113]: 2025-10-05 09:46:23.456533357 +0000 UTC m=+0.076776012 container create 709c71c43900dfb732f0c424da3fa6192721e77f7322378800fc1f7c637b6330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_noether, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, distribution-scope=public, RELEASE=main, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12) Oct 5 05:46:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:46:23 localhost systemd[1]: Started libpod-conmon-709c71c43900dfb732f0c424da3fa6192721e77f7322378800fc1f7c637b6330.scope. Oct 5 05:46:23 localhost systemd[1]: Started libcrun container. Oct 5 05:46:23 localhost podman[291113]: 2025-10-05 09:46:23.520570791 +0000 UTC m=+0.140813436 container init 709c71c43900dfb732f0c424da3fa6192721e77f7322378800fc1f7c637b6330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_noether, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, RELEASE=main, vcs-type=git, release=553, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:46:23 localhost podman[291113]: 2025-10-05 09:46:23.423762624 +0000 UTC m=+0.044005299 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:46:23 localhost systemd[1]: tmp-crun.7Bm5vg.mount: Deactivated successfully. Oct 5 05:46:23 localhost podman[291113]: 2025-10-05 09:46:23.539089236 +0000 UTC m=+0.159331901 container start 709c71c43900dfb732f0c424da3fa6192721e77f7322378800fc1f7c637b6330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_noether, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.buildah.version=1.33.12, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, distribution-scope=public) Oct 5 05:46:23 localhost podman[291113]: 2025-10-05 09:46:23.539445885 +0000 UTC m=+0.159688570 container attach 709c71c43900dfb732f0c424da3fa6192721e77f7322378800fc1f7c637b6330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_noether, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, version=7, release=553, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., RELEASE=main) Oct 5 05:46:23 localhost funny_noether[291133]: 167 167 Oct 5 05:46:23 localhost systemd[1]: libpod-709c71c43900dfb732f0c424da3fa6192721e77f7322378800fc1f7c637b6330.scope: Deactivated successfully. 
Oct 5 05:46:23 localhost podman[291113]: 2025-10-05 09:46:23.543636499 +0000 UTC m=+0.163879184 container died 709c71c43900dfb732f0c424da3fa6192721e77f7322378800fc1f7c637b6330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_noether, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_BRANCH=main) Oct 5 05:46:23 localhost podman[291128]: 2025-10-05 09:46:23.637579158 +0000 UTC m=+0.148105475 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd) Oct 5 05:46:23 localhost podman[291128]: 2025-10-05 09:46:23.651379384 +0000 UTC m=+0.161905741 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 05:46:23 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:46:23 localhost podman[291145]: 2025-10-05 09:46:23.754816972 +0000 UTC m=+0.196960077 container remove 709c71c43900dfb732f0c424da3fa6192721e77f7322378800fc1f7c637b6330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_noether, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:46:23 localhost systemd[1]: libpod-conmon-709c71c43900dfb732f0c424da3fa6192721e77f7322378800fc1f7c637b6330.scope: Deactivated successfully. 
Oct 5 05:46:23 localhost sshd[291186]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:46:23 localhost podman[291173]: Oct 5 05:46:23 localhost podman[291173]: 2025-10-05 09:46:23.98165978 +0000 UTC m=+0.067099118 container create 378223ac54ff2976388f3b937711ef0ad89fa531c3bc783b7f68eebe3f78c2f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:46:24 localhost systemd[1]: Started libpod-conmon-378223ac54ff2976388f3b937711ef0ad89fa531c3bc783b7f68eebe3f78c2f2.scope. Oct 5 05:46:24 localhost systemd[1]: Started libcrun container. 
Oct 5 05:46:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88389b11288a6ef589905028de48f3b363c80b4dc651919dab733cc5b39959b4/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:24 localhost podman[291173]: 2025-10-05 09:46:23.948300872 +0000 UTC m=+0.033740260 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:46:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88389b11288a6ef589905028de48f3b363c80b4dc651919dab733cc5b39959b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88389b11288a6ef589905028de48f3b363c80b4dc651919dab733cc5b39959b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88389b11288a6ef589905028de48f3b363c80b4dc651919dab733cc5b39959b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:24 localhost podman[291173]: 2025-10-05 09:46:24.054248088 +0000 UTC m=+0.139687456 container init 378223ac54ff2976388f3b937711ef0ad89fa531c3bc783b7f68eebe3f78c2f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, ceph=True, io.buildah.version=1.33.12) Oct 5 05:46:24 localhost podman[291173]: 2025-10-05 09:46:24.066557823 +0000 UTC m=+0.151997151 container start 378223ac54ff2976388f3b937711ef0ad89fa531c3bc783b7f68eebe3f78c2f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, version=7, release=553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.33.12, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 5 05:46:24 localhost podman[291173]: 2025-10-05 09:46:24.066857481 +0000 UTC m=+0.152296859 container attach 378223ac54ff2976388f3b937711ef0ad89fa531c3bc783b7f68eebe3f78c2f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, GIT_CLEAN=True, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , RELEASE=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, distribution-scope=public) Oct 5 05:46:24 localhost systemd-logind[760]: New session 64 of user zuul. Oct 5 05:46:24 localhost systemd[1]: Started Session 64 of User zuul. Oct 5 05:46:24 localhost systemd[1]: session-64.scope: Deactivated successfully. Oct 5 05:46:24 localhost systemd-logind[760]: Session 64 logged out. Waiting for processes to exit. Oct 5 05:46:24 localhost systemd-logind[760]: Removed session 64. Oct 5 05:46:24 localhost systemd[1]: var-lib-containers-storage-overlay-f9b8fa773b15baca4ebc5bafc1ef3e85c84790dd3220cbfab20d9664626bb744-merged.mount: Deactivated successfully. 
Oct 5 05:46:24 localhost python3.9[291525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:46:24 localhost admiring_dirac[291190]: [
Oct 5 05:46:24 localhost admiring_dirac[291190]: {
Oct 5 05:46:24 localhost admiring_dirac[291190]: "available": false,
Oct 5 05:46:24 localhost admiring_dirac[291190]: "ceph_device": false,
Oct 5 05:46:24 localhost admiring_dirac[291190]: "device_id": "QEMU_DVD-ROM_QM00001",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "lsm_data": {},
Oct 5 05:46:24 localhost admiring_dirac[291190]: "lvs": [],
Oct 5 05:46:24 localhost admiring_dirac[291190]: "path": "/dev/sr0",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "rejected_reasons": [
Oct 5 05:46:24 localhost admiring_dirac[291190]: "Has a FileSystem",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "Insufficient space (<5GB)"
Oct 5 05:46:24 localhost admiring_dirac[291190]: ],
Oct 5 05:46:24 localhost admiring_dirac[291190]: "sys_api": {
Oct 5 05:46:24 localhost admiring_dirac[291190]: "actuators": null,
Oct 5 05:46:24 localhost admiring_dirac[291190]: "device_nodes": "sr0",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "human_readable_size": "482.00 KB",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "id_bus": "ata",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "model": "QEMU DVD-ROM",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "nr_requests": "2",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "partitions": {},
Oct 5 05:46:24 localhost admiring_dirac[291190]: "path": "/dev/sr0",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "removable": "1",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "rev": "2.5+",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "ro": "0",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "rotational": "1",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "sas_address": "",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "sas_device_handle": "",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "scheduler_mode": "mq-deadline",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "sectors": 0,
Oct 5 05:46:24 localhost admiring_dirac[291190]: "sectorsize": "2048",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "size": 493568.0,
Oct 5 05:46:24 localhost admiring_dirac[291190]: "support_discard": "0",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "type": "disk",
Oct 5 05:46:24 localhost admiring_dirac[291190]: "vendor": "QEMU"
Oct 5 05:46:24 localhost admiring_dirac[291190]: }
Oct 5 05:46:24 localhost admiring_dirac[291190]: }
Oct 5 05:46:24 localhost admiring_dirac[291190]: ]
Oct 5 05:46:25 localhost systemd[1]: libpod-378223ac54ff2976388f3b937711ef0ad89fa531c3bc783b7f68eebe3f78c2f2.scope: Deactivated successfully.
Oct 5 05:46:25 localhost podman[291173]: 2025-10-05 09:46:25.018895062 +0000 UTC m=+1.104334380 container died 378223ac54ff2976388f3b937711ef0ad89fa531c3bc783b7f68eebe3f78c2f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=)
Oct 5 05:46:25 localhost systemd[1]: tmp-crun.IZpD4K.mount: Deactivated successfully.
Oct 5 05:46:25 localhost systemd[1]: var-lib-containers-storage-overlay-88389b11288a6ef589905028de48f3b363c80b4dc651919dab733cc5b39959b4-merged.mount: Deactivated successfully.
Oct 5 05:46:25 localhost podman[293165]: 2025-10-05 09:46:25.100014412 +0000 UTC m=+0.074492311 container remove 378223ac54ff2976388f3b937711ef0ad89fa531c3bc783b7f68eebe3f78c2f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, ceph=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, version=7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph)
Oct 5 05:46:25 localhost systemd[1]: libpod-conmon-378223ac54ff2976388f3b937711ef0ad89fa531c3bc783b7f68eebe3f78c2f2.scope: Deactivated successfully.
Oct 5 05:46:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:46:25 localhost podman[293178]: 2025-10-05 09:46:25.227889094 +0000 UTC m=+0.091419660 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible)
Oct 5 05:46:25 localhost podman[293178]: 2025-10-05 09:46:25.262878638 +0000 UTC m=+0.126409244 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:46:25 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:46:25 localhost python3.9[293284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657584.4322453-3916-123214050241656/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:46:26 localhost nova_compute[238314]: 2025-10-05 09:46:26.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 05:46:26 localhost nova_compute[238314]: 2025-10-05 09:46:26.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 05:46:26 localhost nova_compute[238314]: 2025-10-05 09:46:26.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 05:46:26 localhost nova_compute[238314]: 2025-10-05 09:46:26.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:46:26 localhost nova_compute[238314]: 2025-10-05 09:46:26.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:46:26 localhost nova_compute[238314]: 2025-10-05 09:46:26.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:46:26 localhost python3.9[293392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:46:26 localhost python3.9[293447]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:46:27 localhost python3.9[293555]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:46:28 localhost python3.9[293641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657587.0929224-3916-63018464768482/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:46:28 localhost python3.9[293749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:46:29 localhost python3.9[293835]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657588.2846816-3916-133664117965522/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=7ac938a335a3d7c35e640d8a23d0622f34c4ef39 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:46:29 localhost python3.9[293943]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:46:30 localhost python3.9[294029]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657589.5614455-3916-166704556567024/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:46:30 localhost podman[294047]: 2025-10-05 09:46:30.678674582 +0000 UTC m=+0.084529423 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 5 05:46:30 localhost podman[294047]: 2025-10-05 09:46:30.70869236 +0000 UTC m=+0.114547161 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 5 05:46:30 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:46:31 localhost python3.9[294158]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:46:31 localhost nova_compute[238314]: 2025-10-05 09:46:31.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 05:46:31 localhost nova_compute[238314]: 2025-10-05 09:46:31.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 05:46:31 localhost nova_compute[238314]: 2025-10-05 09:46:31.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 05:46:31 localhost nova_compute[238314]: 2025-10-05 09:46:31.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:46:31 localhost nova_compute[238314]: 2025-10-05 09:46:31.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:46:31 localhost nova_compute[238314]: 2025-10-05 09:46:31.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 05:46:31 localhost python3.9[294268]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:46:32 localhost python3.9[294378]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:46:33 localhost python3.9[294490]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:46:34 localhost python3.9[294598]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:46:35 localhost python3.9[294708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:46:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:46:35 localhost systemd[1]: tmp-crun.B7hh1C.mount: Deactivated successfully.
Oct 5 05:46:35 localhost podman[294711]: 2025-10-05 09:46:35.662657715 +0000 UTC m=+0.068582539 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:46:35 localhost podman[294711]: 2025-10-05 09:46:35.72817359 +0000 UTC m=+0.134098394 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:46:35 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:46:36 localhost python3.9[294788]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:46:36 localhost nova_compute[238314]: 2025-10-05 09:46:36.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:46:36 localhost nova_compute[238314]: 2025-10-05 09:46:36.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:46:37 localhost python3.9[294896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:46:37 localhost python3.9[294951]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:46:38 localhost python3.9[295061]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.833 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.834 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.839 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f73825cd-81c1-42e7-ba60-26498dd5a913', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.835101', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '308e869c-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': '6bcdc485bbd4db9dfdd7d2001890a225363cafa68491eadefccd114035e17582'}]}, 'timestamp': '2025-10-05 09:46:38.840520', '_unique_id': '90a4ce81610f4e998c979d4732eeb895'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.842 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 09:46:38.843 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.866 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 59810000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e3f1c1fe-8c8a-493c-8199-0d98a7cb30ac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 59810000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:46:38.843594', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '3092a754-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.09068813, 'message_signature': 
'6c0af253b480eec3e1fd68d3dcd062fb616a969c664089f3d1f9dfc3f9c131e1'}]}, 'timestamp': '2025-10-05 09:46:38.867572', '_unique_id': 'ff99e8b11d5b40a2a677255be5b7b13c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR 
oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.868 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.870 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.895 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1213559769 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.896 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 162365672 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '52425c1f-740d-4035-af2e-a3e69a93e4c3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1213559769, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:46:38.870221', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '309704e8-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': '8d092892bfbbf7b4b3772408eb2cb3f7b377739e6adc592e38ea4e3569a32ef9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 162365672, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:46:38.870221', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '30971654-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': '973d891f8ccb64e29128e7573e8cc625d299a3fc06cbf6a4bdc37f0251010be3'}]}, 'timestamp': '2025-10-05 09:46:38.896530', '_unique_id': 'd27a2d2e660e4a7da7fd2ea359c13991'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.897 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.898 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.898 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 274779002 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.899 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 31348051 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '76c75125-72a4-4f56-939f-c19509e10424', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 274779002, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:46:38.898927', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '30978738-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': 'ddc708841160341fea42c7231de4eb340125a91d5505cb7d434a5e59082b6d0c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 31348051, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:46:38.898927', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '309799bc-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': '1226b8ef34520ddb82191ce6bdf6958b5e7f962aae4ab5280f6d9c442dcc16bc'}]}, 'timestamp': '2025-10-05 09:46:38.899884', '_unique_id': '4aa915f6f9cf43a6b1f84756c56b0a3c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.900 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.902 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.902 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.902 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 87 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '92e0537c-f91e-4411-90c1-2dcf7557b59b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 87, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.902277', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '30980b72-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': '6c983bf950a55213e39257d99a634b77fa63dbcc17ce9e45a6e87b75eaabd206'}]}, 'timestamp': '2025-10-05 09:46:38.902802', '_unique_id': '7a782767d53e41948d1c5c1ff0793957'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.903 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.904 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.904 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '120242b3-a8ec-40d0-a010-ce098c08cacd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.904954', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '309871a2-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': '5d87fb556fc45db987e94661138aba9dc97f6b3f949c8001af3b7f2371ea5a36'}]}, 'timestamp': '2025-10-05 09:46:38.905451', '_unique_id': 'cafd06e3a60d44d58d7dbc3af1f87926'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.906 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.907 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.907 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 52.31640625 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'c528b044-d4b3-42d6-b063-6922434d2a1c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.31640625, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:46:38.907598', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '3098d8cc-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.09068813, 'message_signature': '5dd1be36e487cf50a4fb411005285d1331b530daea569f23dc7543e87bded7d4'}]}, 'timestamp': '2025-10-05 09:46:38.908038', '_unique_id': '31be953b09414a63935ad9f026fe2937'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.910 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:46:38.927 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.927 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'af0e593c-70f9-4746-a82e-31803833ab7c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:46:38.910255', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 
'vda'}, 'message_id': '309bd9b4-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.134169965, 'message_signature': 'a3f1a9b7f5c6941762b9516e62f84843edda09d4139d66a2a54d4022ae498b86'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:46:38.910255', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '309bea8a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.134169965, 'message_signature': '1c2d14794f40901351adbf485a72eab1d502a264092c18cd49c2e47bd7405ea2'}]}, 'timestamp': '2025-10-05 09:46:38.928219', '_unique_id': '2c5bb8ecce264bd1bb6a4f1a6b690c17'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.930 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:46:38.930 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 73912320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '873ac5f0-b49d-4a38-bdd4-7d7414e988fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73912320, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:46:38.930692', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '309c5f1a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': 'd0a7bf4039b4250548d843d52c75028efc7be961c2c6e94c501b0105a5ba0ae5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:46:38.930692', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '309c748c-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': 'a196731bd196025f375003b106070300f265a21cc30b59c78acd34d0118b9f82'}]}, 'timestamp': '2025-10-05 09:46:38.931680', '_unique_id': '79538b3a30b043f8a5d3e939a9ff0d8e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.932 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.933 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.934 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.934 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.934 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ab997f83-b912-4030-b94a-c70bcfac2910', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.934306', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '309cedfe-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': '12e11658c9bd9986729d164aac399244ce26455297564f3c09b4cce9e7b93f25'}]}, 'timestamp': '2025-10-05 09:46:38.934867', '_unique_id': 'a2a245df41284ab095c0c7c93c774bf7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.935 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.936 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.937 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.937 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '78a24802-87cc-499d-8b19-806599ce0e48', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:46:38.937113', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '309d597e-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.134169965, 'message_signature': '7c7d891410be6b4a14a9c315d04593e2fb18f559de96ed9e15059d36920c8473'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:46:38.937113', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '309d6b3a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.134169965, 'message_signature': '414e2a42bc370556923f41424054c90f387bb4cd02c668f3d0a4709b0fe98b22'}]}, 'timestamp': '2025-10-05 09:46:38.937988', '_unique_id': 'b4c4f6bc01ae4d1291bbea5271f6bc1b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.938 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 09:46:38.940 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.940 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.940 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '45f09b09-6326-4230-aa77-129052cb7d4f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:46:38.940170', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '309dd25a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': '519352f07af587fe38b3b3c7917dbaa323da72fd90e2aa1cae9bd3ed5935a485'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:46:38.940170', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '309de2b8-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': '9a5a7a7f4792c3511a8d3092daa85c5bd52df6f857545a35f7077a973a323c40'}]}, 'timestamp': '2025-10-05 09:46:38.941049', '_unique_id': '2abfb5c28dd84fa09935579edd163848'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.941 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:46:38.943 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.943 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '09a8da77-a175-4932-b999-3f692248559f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.943200', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '309e4906-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': 'ada7dc3874b2cedeeccae844c6fae51c6042d1bea4ff151f26d8a250bd1c9919'}]}, 'timestamp': '2025-10-05 09:46:38.943698', '_unique_id': '0aea4784776645bcaf99510cea67c279'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.945 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '01792ff4-7878-45ab-b333-e7da6c7eebce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.945893', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '309eb0c6-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': 'b4844f82b6aae2e98711aa16a2830552db887d524b55f44a546783a3c74e421a'}]}, 'timestamp': '2025-10-05 09:46:38.946354', '_unique_id': 'a4eaf3701e6d44a7a49504c8e8591cbf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:46:38.948 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 9228 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '648bd400-048d-48c6-bf93-581cabbd666d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9228, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.948585', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '309f19c6-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': '05e9c802a357c03eb0a4f090f9e7a36fd40aed3b9063524efc9d265c559e5cb6'}]}, 'timestamp': '2025-10-05 09:46:38.949038', '_unique_id': '1dfc0fbd89fc469eb37620eca771b655'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.951 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.951 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0e7a6e69-cf6c-426b-988a-dd2271a68601', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.951178', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '309f8046-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': 'b7a80ccf125bcb838ca5252b7b6a0fd84a11450c0cae1b7811509a31fd84b01e'}]}, 'timestamp': '2025-10-05 09:46:38.951668', '_unique_id': '511ea57c2b3442df80371782e449b850'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.952 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.953 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.953 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 574 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '12c6d04f-49e9-45ef-8275-b7752f631d55', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 574, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:46:38.953830', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '309fe6b2-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': '519cbbce1161a4a24dad9e761c0f192aea0301e4dd5b57949985802bd6c5045a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:46:38.953830', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '309ff81e-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': '783ba745b0eb89fab61cc15272a8615a2a436546b1de66bac0fcb219223b629a'}]}, 'timestamp': '2025-10-05 09:46:38.954705', '_unique_id': '7bf13e2c8a704b11904505c3d4658912'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.955 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.956 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.957 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.957 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3b8553f1-58b5-48d3-8809-79cd44a5f9fc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:46:38.957100', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '30a0666e-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.134169965, 'message_signature': '3b23c269c67837ef93e82627ca4fa5d39954f10f83df29512c4cc69279653a90'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:46:38.957100', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '30a07820-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.134169965, 'message_signature': '4d7760df9adaf1e8192dce834855a2fa94be505c33db2450bfbb3878f17e3817'}]}, 'timestamp': '2025-10-05 09:46:38.957980', '_unique_id': '4f92bc9fc5d2426e9e6b62f89a549be7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.958 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.960 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.960 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.960 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '786be4bc-d192-4a34-b6b6-7f2f91a66b1d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:46:38.960232', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '30a0dd60-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': '3df05f59af0cb8df3af1e9a70a9f05efbe13bc8dbe9d89c17839381830a57efd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:46:38.960232', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '30a0e864-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.094132574, 'message_signature': '3ec276eccea5e5db070dd12280acced972df4f9e769e040679ca88f7f0ed6f46'}]}, 'timestamp': '2025-10-05 09:46:38.960778', '_unique_id': 'd42a09d164004a7d9f9bf7a16b213992'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd68be506-d53a-40b8-b6b6-773b6c533d4d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.962093', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '30a12662-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': '7428130bcee574768681f821fe1304950b9988714954939422364e0782d7da04'}]}, 'timestamp': '2025-10-05 09:46:38.962418', '_unique_id': 'dd018332b184455b8855ca51f327c3ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.962 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:46:38.963 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.963 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7d720de4-7738-453c-a356-0ec60d57044b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:46:38.963734', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '30a1676c-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11078.059018948, 'message_signature': '491239e898ad0c87f21a42c01a57e96975a3073ac7cab1655a8d34abb445f22f'}]}, 'timestamp': '2025-10-05 09:46:38.964080', '_unique_id': 'bef96525f21b4cd09c0c587abb2af78e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:46:38.964 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:46:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:46:38.964 12 ERROR oslo_messaging.notify.messaging Oct 5 05:46:39 localhost python3.9[295171]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:46:40 localhost podman[295281]: 2025-10-05 09:46:40.357695038 +0000 UTC m=+0.083035792 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm) Oct 5 05:46:40 localhost podman[295281]: 2025-10-05 09:46:40.371905725 +0000 UTC m=+0.097246479 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 05:46:40 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:46:40 localhost python3[295282]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:46:40 localhost python3[295282]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "0d460c957a79c0fa941447cb00e5ab934f0ccc1442862d4e417ff427bd26aed9",#012 "Digest": "sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:32:21.432647731Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": 
"1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1207527293,#012 "VirtualSize": 1207527293,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:6a39f36d67f67acbd99daa43f5f54c2ceabda19dd25b824285c9338b74a7494e",#012 "sha256:9a26e1dd0ae990be1ae7a87aaaac389265f77f7100ea3ac633d95d89956449a4"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 
"org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 
"created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Oct 5 05:46:41 localhost nova_compute[238314]: 2025-10-05 09:46:41.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:46:41 localhost nova_compute[238314]: 2025-10-05 09:46:41.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:46:41 localhost python3.9[295466]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:42 localhost python3.9[295578]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Oct 5 05:46:43 localhost nova_compute[238314]: 2025-10-05 
09:46:43.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8491 DF PROTO=TCP SPT=44174 DPT=9102 SEQ=2245447848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF2333E0000000001030307) Oct 5 05:46:43 localhost python3.9[295688]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:46:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:46:44 localhost nova_compute[238314]: 2025-10-05 09:46:44.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:44 localhost nova_compute[238314]: 2025-10-05 09:46:44.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:44 localhost nova_compute[238314]: 2025-10-05 09:46:44.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:46:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8492 DF PROTO=TCP SPT=44174 DPT=9102 SEQ=2245447848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF2375D0000000001030307) Oct 5 05:46:44 localhost systemd[1]: tmp-crun.XrDwcf.mount: Deactivated successfully. Oct 5 05:46:44 localhost podman[295799]: 2025-10-05 09:46:44.480763101 +0000 UTC m=+0.099413208 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, distribution-scope=public, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Oct 5 05:46:44 localhost podman[295799]: 2025-10-05 09:46:44.495508573 +0000 UTC m=+0.114158720 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, version=9.6, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 05:46:44 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:46:44 localhost python3[295798]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:46:44 localhost python3[295798]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "0d460c957a79c0fa941447cb00e5ab934f0ccc1442862d4e417ff427bd26aed9",#012 "Digest": "sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:32:21.432647731Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1207527293,#012 "VirtualSize": 1207527293,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:6a39f36d67f67acbd99daa43f5f54c2ceabda19dd25b824285c9338b74a7494e",#012 "sha256:9a26e1dd0ae990be1ae7a87aaaac389265f77f7100ea3ac633d95d89956449a4"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": 
"/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 
'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Oct 5 05:46:45 localhost nova_compute[238314]: 2025-10-05 09:46:45.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:46:45 localhost podman[295993]: 2025-10-05 09:46:45.608071126 +0000 UTC m=+0.090353911 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , 
managed_by=edpm_ansible) Oct 5 05:46:45 localhost podman[295993]: 2025-10-05 09:46:45.617899475 +0000 UTC m=+0.100182250 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:46:45 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 05:46:45 localhost python3.9[295992]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.372 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.376 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4993-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:46:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8493 DF PROTO=TCP SPT=44174 DPT=9102 SEQ=2245447848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF23F5D0000000001030307) Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.494 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.494 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.495 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 
09:46:46.495 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:46:46 localhost nova_compute[238314]: 2025-10-05 09:46:46.495 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:46:47 localhost python3.9[296146]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.018 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.084 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.084 2 DEBUG nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] skipping disk for 
instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.317 2 WARNING nova.virt.libvirt.driver [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.319 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12061MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.320 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.320 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.401 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.402 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.403 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.451 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:46:47 localhost python3.9[296258]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657607.0848124-4552-105515800339410/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.899 2 DEBUG oslo_concurrency.processutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.906 2 DEBUG nova.compute.provider_tree [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.928 2 DEBUG nova.scheduler.client.report [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.930 2 DEBUG nova.compute.resource_tracker [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:46:47 localhost nova_compute[238314]: 2025-10-05 09:46:47.931 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:46:48 localhost python3.9[296334]: ansible-systemd Invoked with state=started 
name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:46:48 localhost nova_compute[238314]: 2025-10-05 09:46:48.933 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:49 localhost nova_compute[238314]: 2025-10-05 09:46:49.377 2 DEBUG oslo_service.periodic_task [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:49 localhost nova_compute[238314]: 2025-10-05 09:46:49.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:46:49 localhost nova_compute[238314]: 2025-10-05 09:46:49.378 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:46:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:46:49 localhost nova_compute[238314]: 2025-10-05 09:46:49.595 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:46:49 localhost nova_compute[238314]: 2025-10-05 09:46:49.596 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:46:49 localhost nova_compute[238314]: 2025-10-05 09:46:49.596 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:46:49 localhost nova_compute[238314]: 2025-10-05 09:46:49.596 2 DEBUG nova.objects.instance [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:46:49 localhost podman[296354]: 2025-10-05 09:46:49.668499803 +0000 UTC m=+0.080311059 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:46:49 localhost podman[296354]: 2025-10-05 09:46:49.703077015 +0000 UTC m=+0.114888241 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:46:49 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:46:49 localhost nova_compute[238314]: 2025-10-05 09:46:49.934 2 DEBUG nova.network.neutron [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:46:49 localhost 
nova_compute[238314]: 2025-10-05 09:46:49.950 2 DEBUG oslo_concurrency.lockutils [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:46:49 localhost nova_compute[238314]: 2025-10-05 09:46:49.950 2 DEBUG nova.compute.manager [None req-025d61a7-ae7a-4b99-8586-7ec97b068331 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:46:50 localhost python3.9[296467]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8494 DF PROTO=TCP SPT=44174 DPT=9102 SEQ=2245447848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF24F1D0000000001030307) Oct 5 05:46:51 localhost python3.9[296575]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:51 localhost podman[248506]: time="2025-10-05T09:46:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:46:51 localhost podman[248506]: @ - - [05/Oct/2025:09:46:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138317 "" "Go-http-client/1.1" Oct 5 05:46:51 localhost nova_compute[238314]: 2025-10-05 09:46:51.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:46:51 localhost nova_compute[238314]: 2025-10-05 09:46:51.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:46:51 localhost podman[248506]: @ - - [05/Oct/2025:09:46:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17347 "" "Go-http-client/1.1" Oct 5 05:46:52 localhost openstack_network_exporter[250601]: ERROR 09:46:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:46:52 localhost openstack_network_exporter[250601]: ERROR 09:46:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:46:52 localhost openstack_network_exporter[250601]: ERROR 09:46:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:46:52 localhost openstack_network_exporter[250601]: ERROR 09:46:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:46:52 localhost openstack_network_exporter[250601]: Oct 5 05:46:52 localhost openstack_network_exporter[250601]: ERROR 09:46:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:46:52 localhost openstack_network_exporter[250601]: Oct 5 05:46:52 localhost python3.9[296683]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:53 localhost python3.9[296793]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None 
blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None 
secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Oct 5 05:46:53 localhost systemd-journald[47722]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 105.7 (352 of 333 items), suggesting rotation. Oct 5 05:46:53 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:46:53 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:46:53 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:46:53 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:46:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:46:54 localhost podman[296928]: 2025-10-05 09:46:54.171517317 +0000 UTC m=+0.088345387 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:46:54 localhost podman[296928]: 2025-10-05 09:46:54.212074922 +0000 UTC m=+0.128902982 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 05:46:54 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:46:54 localhost python3.9[296927]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:46:54 localhost systemd[1]: Stopping nova_compute container... Oct 5 05:46:54 localhost systemd[1]: tmp-crun.Z38TIF.mount: Deactivated successfully. Oct 5 05:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:46:55 localhost systemd[1]: tmp-crun.S9xTRD.mount: Deactivated successfully. Oct 5 05:46:55 localhost podman[296963]: 2025-10-05 09:46:55.671836603 +0000 UTC m=+0.084370759 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:46:55 localhost podman[296963]: 2025-10-05 09:46:55.70626562 +0000 UTC m=+0.118799756 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:46:55 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.766 2 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.769 2 DEBUG oslo_concurrency.lockutils [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.769 2 DEBUG oslo_concurrency.lockutils [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:46:56 localhost nova_compute[238314]: 2025-10-05 09:46:56.769 2 DEBUG oslo_concurrency.lockutils [None req-e9768d56-d285-4384-a905-1ee60891846d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:46:57 localhost journal[207037]: End of file while reading data: Input/output error Oct 5 05:46:57 localhost systemd[1]: libpod-dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4.scope: Deactivated successfully. Oct 5 05:46:57 localhost systemd[1]: libpod-dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4.scope: Consumed 22.065s CPU time. 
Oct 5 05:46:57 localhost podman[296951]: 2025-10-05 09:46:57.136864357 +0000 UTC m=+2.700187828 container died dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Oct 5 05:46:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4-userdata-shm.mount: Deactivated successfully. Oct 5 05:46:57 localhost systemd[1]: var-lib-containers-storage-overlay-dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c-merged.mount: Deactivated successfully. 
Oct 5 05:46:57 localhost podman[296951]: 2025-10-05 09:46:57.27835607 +0000 UTC m=+2.841679491 container cleanup dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 05:46:57 localhost podman[296951]: nova_compute Oct 5 05:46:57 localhost podman[296995]: 2025-10-05 09:46:57.366780299 +0000 UTC m=+0.055708569 container cleanup dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 05:46:57 localhost podman[296995]: nova_compute Oct 5 05:46:57 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Oct 5 05:46:57 localhost systemd[1]: Stopped nova_compute container. Oct 5 05:46:57 localhost systemd[1]: Starting nova_compute container... Oct 5 05:46:57 localhost systemd[1]: Started libcrun container. 
Oct 5 05:46:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1d9c0e6900b5a6a6ac834e821c1e5c6e1083734aa10737418ae0ece073982c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:57 localhost podman[297006]: 2025-10-05 09:46:57.507863761 +0000 UTC m=+0.110300275 container init dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:46:57 localhost podman[297006]: 2025-10-05 09:46:57.516887138 +0000 UTC m=+0.119323662 container start dc8539e3a634d6f460a8b2490207c5e292fe8f8a0e229be17fead81f93f497f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', 
'/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 05:46:57 localhost podman[297006]: nova_compute Oct 5 05:46:57 localhost nova_compute[297021]: + sudo -E kolla_set_configs Oct 5 05:46:57 localhost systemd[1]: Started nova_compute container. Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Validating config file Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying service configuration files Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Deleting /etc/nova/nova.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/nova/nova.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf 
Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Deleting /etc/ceph Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Creating directory /etc/ceph Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/ceph Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 5 05:46:57 localhost 
nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Writing out command to execute Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:46:57 localhost nova_compute[297021]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 5 05:46:57 localhost nova_compute[297021]: ++ cat /run_command Oct 5 05:46:57 localhost nova_compute[297021]: + CMD=nova-compute Oct 5 05:46:57 localhost nova_compute[297021]: + ARGS= Oct 5 05:46:57 localhost nova_compute[297021]: + sudo kolla_copy_cacerts Oct 5 05:46:57 localhost nova_compute[297021]: + [[ ! -n '' ]] Oct 5 05:46:57 localhost nova_compute[297021]: + . 
kolla_extend_start Oct 5 05:46:57 localhost nova_compute[297021]: Running command: 'nova-compute' Oct 5 05:46:57 localhost nova_compute[297021]: + echo 'Running command: '\''nova-compute'\''' Oct 5 05:46:57 localhost nova_compute[297021]: + umask 0022 Oct 5 05:46:57 localhost nova_compute[297021]: + exec nova-compute Oct 5 05:46:58 localhost python3.9[297142]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None 
memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Oct 5 05:46:58 localhost systemd[1]: Started libpod-conmon-e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710.scope. Oct 5 05:46:58 localhost systemd[1]: Started libcrun container. 
Oct 5 05:46:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59bfa73ece6e48d2e50d8d1487e9b2b63ac3b08462cb3a0828f618e73cc1808/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59bfa73ece6e48d2e50d8d1487e9b2b63ac3b08462cb3a0828f618e73cc1808/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59bfa73ece6e48d2e50d8d1487e9b2b63ac3b08462cb3a0828f618e73cc1808/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:58 localhost podman[297169]: 2025-10-05 09:46:58.721158309 +0000 UTC m=+0.150165991 container init e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base 
Image, config_id=edpm, container_name=nova_compute_init, tcib_managed=true) Oct 5 05:46:58 localhost podman[297169]: 2025-10-05 09:46:58.731743337 +0000 UTC m=+0.160751019 container start e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS) Oct 5 05:46:58 localhost python3.9[297142]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Applying nova statedir ownership Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Changing ownership of 
/var/lib/nova from 1000:1000 to 42436:42436 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/2b20c302-a8d1-4ee0-990b-24973ca23df1/ Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/2b20c302-a8d1-4ee0-990b-24973ca23df1 already 42436:42436 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/2b20c302-a8d1-4ee0-990b-24973ca23df1 to system_u:object_r:container_file_t:s0 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/2b20c302-a8d1-4ee0-990b-24973ca23df1/console.log Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/36c8e772b4cca487d730e1df6ad67360170775c3 Oct 5 05:46:58 localhost 
nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/ Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-36c8e772b4cca487d730e1df6ad67360170775c3 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Oct 
5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7dbe5bae7bc27ef07490c629ec1f09edaa9e8c135ff89c3f08f1e44f39cf5928 Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7bff446e28da7b7609613334d4f266c2377bdec4e8e9a595eeb621178e5df9fb Oct 5 05:46:58 localhost nova_compute_init[297190]: INFO:nova_statedir:Nova statedir ownership complete Oct 5 05:46:58 localhost systemd[1]: libpod-e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710.scope: Deactivated successfully. 
Oct 5 05:46:58 localhost podman[297201]: 2025-10-05 09:46:58.857092092 +0000 UTC m=+0.045273494 container died e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3, container_name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:46:58 localhost podman[297201]: 2025-10-05 09:46:58.942062916 +0000 UTC m=+0.130244288 container cleanup e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger 
-t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:46:58 localhost systemd[1]: libpod-conmon-e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710.scope: Deactivated successfully. Oct 5 05:46:59 localhost systemd[1]: var-lib-containers-storage-overlay-e59bfa73ece6e48d2e50d8d1487e9b2b63ac3b08462cb3a0828f618e73cc1808-merged.mount: Deactivated successfully. Oct 5 05:46:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e26ede994012ea73036f672f269d5566f3f005986dfd8f2aaede3ee76b1ec710-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:46:59 localhost nova_compute[297021]: 2025-10-05 09:46:59.278 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:46:59 localhost nova_compute[297021]: 2025-10-05 09:46:59.278 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:46:59 localhost nova_compute[297021]: 2025-10-05 09:46:59.278 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:46:59 localhost nova_compute[297021]: 2025-10-05 09:46:59.279 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Oct 5 05:46:59 localhost nova_compute[297021]: 2025-10-05 09:46:59.389 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:46:59 localhost nova_compute[297021]: 2025-10-05 09:46:59.413 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:46:59 localhost systemd[1]: session-62.scope: Deactivated successfully. Oct 5 05:46:59 localhost systemd[1]: session-62.scope: Consumed 1min 49.901s CPU time. Oct 5 05:46:59 localhost systemd-logind[760]: Session 62 logged out. Waiting for processes to exit. Oct 5 05:46:59 localhost systemd-logind[760]: Removed session 62. 
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.098 2 INFO nova.virt.driver [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.205 2 INFO nova.compute.provider_config [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.215 2 DEBUG oslo_concurrency.lockutils [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.215 2 DEBUG oslo_concurrency.lockutils [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.216 2 DEBUG oslo_concurrency.lockutils [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.216 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.216 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.216 2 
DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.216 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.216 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.217 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.217 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.217 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.217 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.217 2 DEBUG oslo_service.service 
[None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.217 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.217 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.217 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.218 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.218 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.218 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.218 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] config_drive_format = iso9660 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.218 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.218 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.218 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] console_host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.219 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.219 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.219 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.219 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost 
nova_compute[297021]: 2025-10-05 09:47:00.219 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.219 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.219 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.220 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.220 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.220 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.220 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.220 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.220 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.220 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.220 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.221 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.221 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] graceful_shutdown_timeout = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.221 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.221 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] host = np0005471150.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.221 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.221 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.221 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.222 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.222 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] instance_build_timeout = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.222 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.222 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.222 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.222 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.223 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.223 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.223 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] instances_path = /var/lib/nova/instances log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.223 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.223 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.223 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.223 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.224 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.224 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.224 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.224 2 
DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.224 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.224 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.224 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.225 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.225 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.225 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.225 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.225 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.225 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.225 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.225 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.226 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.226 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.226 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.226 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.226 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.226 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.227 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.227 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.227 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost 
nova_compute[297021]: 2025-10-05 09:47:00.227 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.227 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.227 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.227 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.228 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.228 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.228 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.228 2 DEBUG 
oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.228 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.228 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.228 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.229 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.229 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.229 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.229 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] publish_errors = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.229 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.229 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.229 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.230 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.230 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.230 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.230 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 
2025-10-05 09:47:00.230 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.230 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.230 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.230 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.231 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.231 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.231 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.231 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] reserved_huge_pages = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.231 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.231 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.231 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.232 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.232 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.232 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.232 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 
05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.232 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.232 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.232 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.233 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.233 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.233 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.233 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.233 2 DEBUG oslo_service.service 
[None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.233 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.233 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.233 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.234 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.234 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.234 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.234 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] tempdir = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.234 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.234 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.234 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.235 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.235 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.235 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.235 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.235 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.235 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.235 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.236 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.236 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.236 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.236 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.236 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] volume_usage_poll_interval = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.236 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.236 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.237 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.237 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.237 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.237 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.237 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.237 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.237 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.238 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.238 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.238 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.238 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.238 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.238 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.238 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.239 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.239 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.239 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.239 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.239 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.max_limit = 1000 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.239 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.239 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.240 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.240 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.240 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.240 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.240 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.240 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.240 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.240 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.241 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.241 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.241 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.241 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.241 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.241 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.241 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.242 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.242 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.242 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.242 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.242 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.242 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.242 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.243 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.243 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.243 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.243 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.243 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.243 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.243 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.244 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.244 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.244 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.244 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.244 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.socket_keepalive_count = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.244 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.244 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.245 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.245 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.245 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.245 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.245 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 
2025-10-05 09:47:00.245 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.245 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.246 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.246 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.246 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.246 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.246 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.246 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.246 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.247 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.247 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.247 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.247 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.247 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.247 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.247 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.247 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.248 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.248 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.248 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.248 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.248 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.248 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.248 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.249 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.249 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.249 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.249 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.249 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.249 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.249 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.250 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.250 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.250 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.250 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.250 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.250 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.250 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.251 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.251 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.251 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.251 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.251 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.251 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.251 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.251 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.252 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.252 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.252 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.252 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.252 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.252 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.252 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.253 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.253 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.253 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.253 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.253 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.253 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.253 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.254 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.254 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.254 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.254 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.254 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.254 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.254 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.255 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.255 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.255 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.255 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.255 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.255 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.255 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.255 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.256 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.256 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.256 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.256 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.256 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.256 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.256 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.257 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.257 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.257 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.257 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.257 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.257 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.258 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.258 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.258 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.258 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.258 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.258 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.258 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.259 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.259 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.259 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.259 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.259 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.259 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.259 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.259 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.260 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.260 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.260 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.260 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.260 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.260 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.260 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.261 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.261 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.261 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.261 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.261 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.261 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.261 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.262 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.262 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.262 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.262 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.262 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.262 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.262 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.263 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.263 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.263 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.263 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.263 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.263 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.263 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.264 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.264 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.264 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.264 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.264 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.264 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.264 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.264 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - -
- -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.265 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.265 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.265 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.265 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.265 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.265 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.266 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.266 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.266 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.266 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.266 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.266 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.267 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.267 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.api_max_retries = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.267 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.267 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.267 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.267 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.267 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.267 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.268 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 
09:47:00.268 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.268 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.268 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.268 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.268 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.268 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.269 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.269 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.peer_list = [] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.269 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.269 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.269 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.269 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.269 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.269 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.270 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.270 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.270 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.270 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.270 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.270 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.271 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.271 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.271 2 DEBUG 
oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.271 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.271 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.271 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.271 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.271 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.272 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.272 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.keyfile = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.272 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.272 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.272 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.272 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.272 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.273 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.273 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 
localhost nova_compute[297021]: 2025-10-05 09:47:00.273 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.273 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.273 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.273 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.273 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.273 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.274 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 
09:47:00.274 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.274 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.274 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.274 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.274 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.274 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.275 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.275 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.275 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.275 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.275 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.275 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.275 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.276 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.276 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost 
nova_compute[297021]: 2025-10-05 09:47:00.276 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.276 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.276 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.276 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.276 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.277 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.277 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.277 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - 
- - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.277 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.277 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.277 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.277 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.277 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.278 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.278 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.service_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.278 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.278 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.278 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.278 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.278 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.279 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.279 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 
localhost nova_compute[297021]: 2025-10-05 09:47:00.279 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.279 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.279 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.279 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.279 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.280 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.280 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.280 2 DEBUG 
oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.280 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.280 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.280 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.280 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.281 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.281 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.281 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - 
-] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.281 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.281 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.281 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.281 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.282 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.282 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.282 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.282 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.282 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.282 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.282 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.282 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.283 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.283 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.iser_use_multipath = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.283 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.283 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.283 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.283 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.283 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.284 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.284 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.284 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.284 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.284 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.284 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.284 2 WARNING oslo_config.cfg [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Oct 5 05:47:00 localhost nova_compute[297021]: live_migration_uri is deprecated for removal in favor of two other options that Oct 5 05:47:00 localhost nova_compute[297021]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Oct 5 05:47:00 localhost nova_compute[297021]: and ``live_migration_inbound_addr`` respectively. Oct 5 05:47:00 localhost nova_compute[297021]: ). 
Its value may be silently ignored in the future.#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.285 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.285 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.285 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.285 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.285 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.285 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.286 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.286 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.286 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.286 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.286 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.286 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.286 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.287 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 
localhost nova_compute[297021]: 2025-10-05 09:47:00.287 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.287 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.287 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.287 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.287 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rbd_secret_uuid = 659062ac-50b4-5607-b699-3105da7f55ee log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.287 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.288 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost 
nova_compute[297021]: 2025-10-05 09:47:00.288 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.288 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.288 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.288 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.288 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.288 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.289 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.289 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.289 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.289 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.289 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.289 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.289 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.290 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.290 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.290 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.290 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.290 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.290 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.290 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.291 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.291 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.volume_clear_size = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.291 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.291 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.291 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.291 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.291 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.292 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.292 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.292 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.292 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.292 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.292 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.292 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.293 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.293 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost 
nova_compute[297021]: 2025-10-05 09:47:00.293 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.293 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.293 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.293 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.293 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.294 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.294 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.294 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.294 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.294 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.294 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.294 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.295 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.295 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.295 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.service_metadata_proxy = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.295 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.295 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.295 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.295 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.295 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.296 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.296 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost 
nova_compute[297021]: 2025-10-05 09:47:00.296 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.296 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.296 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.296 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.296 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.297 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.297 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 
09:47:00.297 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.298 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.298 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.298 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.298 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.298 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.298 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.298 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - 
- - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.299 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.299 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.299 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.299 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.299 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.299 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.299 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.endpoint_override = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.299 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.300 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.300 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.300 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.300 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.300 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.300 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost 
nova_compute[297021]: 2025-10-05 09:47:00.300 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.301 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.301 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.301 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.301 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.301 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.301 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.301 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.302 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.302 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.302 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.302 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.302 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.302 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.302 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.username = nova log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.303 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.303 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.303 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.303 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.303 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.303 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.303 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.304 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.304 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.304 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.304 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.304 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.304 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.304 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.304 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - 
- - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.305 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.305 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.305 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.305 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.305 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.305 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.306 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.306 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.306 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.306 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.306 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.306 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.306 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.307 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - 
- - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.307 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.307 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.307 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.307 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.307 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.307 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost 
nova_compute[297021]: 2025-10-05 09:47:00.308 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.308 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.308 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.308 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.308 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.308 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.308 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.309 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.309 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.309 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.309 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.309 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.309 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.309 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.310 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.310 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.310 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.310 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.310 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.310 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.310 2 DEBUG 
oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.311 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.311 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.311 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.311 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.311 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.311 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.311 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.312 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.312 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.312 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.312 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.312 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.312 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.312 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.send_service_user_token = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.313 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.313 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.313 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.313 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.313 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.313 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.313 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.314 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.314 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.314 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.314 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.314 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.314 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.314 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.315 2 DEBUG 
oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.315 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.315 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.315 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.315 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.315 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.315 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.316 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.316 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.316 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.316 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.316 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.316 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.316 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.316 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.api_retry_count = 10 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.317 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.317 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.317 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.317 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.317 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.317 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.317 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost 
nova_compute[297021]: 2025-10-05 09:47:00.318 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.318 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.318 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.318 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.318 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.318 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.318 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.319 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf 
- - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.319 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.319 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.319 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.319 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.319 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.319 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.319 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.vnc_keymap = en-us log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.320 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.320 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.320 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.320 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.320 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.320 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.321 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.novncproxy_port = 6080 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.321 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.321 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.321 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.321 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.321 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.321 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.322 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.322 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.322 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.322 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.322 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.322 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.322 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.323 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.323 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.323 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.323 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.323 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.323 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.323 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.323 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.324 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.324 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.324 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.324 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.324 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.324 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 
09:47:00.324 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.325 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.325 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.325 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.325 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.325 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.325 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.325 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.326 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.326 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.326 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.326 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.326 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.326 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.326 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.327 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.327 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.327 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.327 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.327 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.327 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.327 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.328 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.328 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.328 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.328 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.328 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.328 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.328 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.329 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.329 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.329 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.329 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.329 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.329 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.329 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.329 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.330 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.330 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.330 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.330 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.330 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 
09:47:00.330 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.330 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.331 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.331 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.331 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.331 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.331 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.331 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.331 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.332 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.332 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.332 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.332 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.332 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 
localhost nova_compute[297021]: 2025-10-05 09:47:00.332 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.332 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.333 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.333 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.333 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.333 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.333 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 
localhost nova_compute[297021]: 2025-10-05 09:47:00.334 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.334 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.334 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.334 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.334 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.334 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.334 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.334 2 DEBUG 
oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.335 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.335 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.335 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.335 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.335 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.335 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.335 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.336 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.336 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.336 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.336 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.336 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.336 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.336 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.project_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.337 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.337 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.337 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.337 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.337 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.337 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.337 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 
localhost nova_compute[297021]: 2025-10-05 09:47:00.338 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.338 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.338 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.338 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.338 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.338 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.338 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.338 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.339 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.339 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.339 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.339 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.339 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.339 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.339 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.340 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.340 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.340 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.340 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.340 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.340 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.340 2 
DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.341 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.341 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.341 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.341 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.341 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.341 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 
09:47:00.341 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.342 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.342 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.342 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.342 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.342 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.342 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.342 2 DEBUG oslo_service.service [None 
req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.343 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.343 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.343 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.343 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.343 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.343 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.343 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] 
privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.343 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.344 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.344 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.344 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.344 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.344 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.344 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] nova_sys_admin.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.344 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.345 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.345 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.345 2 DEBUG oslo_service.service [None req-00d128b1-091a-4aed-a737-2b9b33f25bdf - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.346 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.366 2 INFO nova.virt.node [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Determined node identity 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from /var/lib/nova/compute_id#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.367 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.367 2 DEBUG 
nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.367 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.367 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.379 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.382 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.382 2 INFO nova.virt.libvirt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Connection event '1' reason 'None'#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.387 2 INFO nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Libvirt host capabilities
Oct 5 05:47:00 localhost nova_compute[297021]: [capabilities XML elided; element tags were stripped in capture. Recoverable values: host uuid 8a2ee9a2-7fe7-4677-a151-037462d3ba7a; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory/page values as captured: 16116612, 4029153, 0, 0; security models selinux (base labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guests for 32-bit and 64-bit via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.6.0 (canonical q35)]#033[00m
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.394 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.398 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: /usr/libexec/qemu-kvm Oct 5 05:47:00 localhost nova_compute[297021]: kvm Oct 5 05:47:00 localhost nova_compute[297021]: pc-i440fx-rhel7.6.0 Oct 5 05:47:00 localhost nova_compute[297021]: i686 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: rom Oct 5
05:47:00 localhost nova_compute[297021]: pflash Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: yes Oct 5 05:47:00 localhost nova_compute[297021]: no Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: no Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: on Oct 5 05:47:00 localhost nova_compute[297021]: off Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: on Oct 5 05:47:00 localhost nova_compute[297021]: off Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome Oct 5 05:47:00 localhost nova_compute[297021]: AMD Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 486 Oct 5 05:47:00 localhost nova_compute[297021]: 486-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-noTSX Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-noTSX-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-noTSX Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v5 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Conroe Oct 5 05:47:00 localhost nova_compute[297021]: Conroe-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Cooperlake Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cooperlake-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 
05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cooperlake-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Denverton Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Denverton-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Denverton-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Denverton-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Dhyana Oct 5 05:47:00 localhost nova_compute[297021]: Dhyana-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Dhyana-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Genoa Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Genoa-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-IBPB Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Milan Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Milan-v1 
Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Milan-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome-v4 Oct 5 05:47:00 localhost 
nova_compute[297021]: EPYC-v1 Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-v2 Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: GraniteRapids Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 
5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: GraniteRapids-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 
05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: GraniteRapids-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-noTSX
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-noTSX-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-v3
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-v4
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-noTSX
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v3
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v4
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v5
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v6
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v7
Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge
Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge-v1
Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge-v2
Oct 5 05:47:00 localhost nova_compute[297021]: KnightsMill
Oct 5 05:47:00 localhost nova_compute[297021]: KnightsMill-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem
Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G1-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G2
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G2-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G3
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G3-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G4
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G4-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G5
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G5-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Penryn
Oct 5 05:47:00 localhost nova_compute[297021]: Penryn-v1
Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge
Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge-v1
Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge-v2
Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids
Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids-v1
Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids-v2
Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids-v3
Oct 5 05:47:00 localhost nova_compute[297021]: SierraForest
Oct 5 05:47:00 localhost nova_compute[297021]: SierraForest-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-noTSX-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v3
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v4
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-noTSX-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v3
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v4
Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v5
Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge
Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v3
Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v4
Oct 5 05:47:00 localhost nova_compute[297021]: Westmere
Oct 5 05:47:00 localhost nova_compute[297021]: Westmere-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Westmere-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Westmere-v2
Oct 5 05:47:00 localhost nova_compute[297021]: athlon
5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: athlon-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: core2duo Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: core2duo-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: coreduo Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: coreduo-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: kvm32 Oct 5 05:47:00 localhost nova_compute[297021]: kvm32-v1 Oct 5 05:47:00 localhost nova_compute[297021]: kvm64 Oct 5 05:47:00 localhost nova_compute[297021]: kvm64-v1 Oct 5 05:47:00 localhost nova_compute[297021]: n270 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: n270-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: pentium Oct 5 05:47:00 localhost nova_compute[297021]: pentium-v1 Oct 5 05:47:00 localhost nova_compute[297021]: pentium2 
Oct 5 05:47:00 localhost nova_compute[297021]: pentium2-v1 Oct 5 05:47:00 localhost nova_compute[297021]: pentium3 Oct 5 05:47:00 localhost nova_compute[297021]: pentium3-v1 Oct 5 05:47:00 localhost nova_compute[297021]: phenom Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: phenom-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: qemu32 Oct 5 05:47:00 localhost nova_compute[297021]: qemu32-v1 Oct 5 05:47:00 localhost nova_compute[297021]: qemu64 Oct 5 05:47:00 localhost nova_compute[297021]: qemu64-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: file Oct 5 05:47:00 localhost nova_compute[297021]: anonymous Oct 5 05:47:00 localhost nova_compute[297021]: memfd Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: disk Oct 5 05:47:00 localhost nova_compute[297021]: cdrom Oct 5 05:47:00 localhost nova_compute[297021]: floppy Oct 5 05:47:00 localhost nova_compute[297021]: lun Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: ide Oct 5 05:47:00 localhost nova_compute[297021]: fdc Oct 5 05:47:00 localhost nova_compute[297021]: scsi Oct 5 05:47:00 localhost nova_compute[297021]: virtio Oct 5 05:47:00 
localhost nova_compute[297021]: usb Oct 5 05:47:00 localhost nova_compute[297021]: sata Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: virtio Oct 5 05:47:00 localhost nova_compute[297021]: virtio-transitional Oct 5 05:47:00 localhost nova_compute[297021]: virtio-non-transitional Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: vnc Oct 5 05:47:00 localhost nova_compute[297021]: egl-headless Oct 5 05:47:00 localhost nova_compute[297021]: dbus Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: subsystem Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: default Oct 5 05:47:00 localhost nova_compute[297021]: mandatory Oct 5 05:47:00 localhost nova_compute[297021]: requisite Oct 5 05:47:00 localhost nova_compute[297021]: optional Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: usb Oct 5 05:47:00 localhost nova_compute[297021]: pci Oct 5 05:47:00 localhost nova_compute[297021]: scsi Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: virtio Oct 5 05:47:00 localhost nova_compute[297021]: virtio-transitional Oct 5 
05:47:00 localhost nova_compute[297021]: virtio-non-transitional Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: random Oct 5 05:47:00 localhost nova_compute[297021]: egd Oct 5 05:47:00 localhost nova_compute[297021]: builtin Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: path Oct 5 05:47:00 localhost nova_compute[297021]: handle Oct 5 05:47:00 localhost nova_compute[297021]: virtiofs Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: tpm-tis Oct 5 05:47:00 localhost nova_compute[297021]: tpm-crb Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: emulator Oct 5 05:47:00 localhost nova_compute[297021]: external Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 2.0 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: usb Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: pty Oct 5 05:47:00 localhost nova_compute[297021]: unix Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 
05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: qemu Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: builtin Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: default Oct 5 05:47:00 localhost nova_compute[297021]: passt Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: isa Oct 5 05:47:00 localhost nova_compute[297021]: hyperv Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: relaxed Oct 5 05:47:00 localhost nova_compute[297021]: vapic Oct 5 05:47:00 localhost nova_compute[297021]: spinlocks Oct 5 05:47:00 localhost nova_compute[297021]: vpindex Oct 5 05:47:00 localhost nova_compute[297021]: runtime Oct 5 05:47:00 localhost nova_compute[297021]: synic Oct 5 05:47:00 localhost nova_compute[297021]: stimer Oct 5 05:47:00 localhost 
nova_compute[297021]: reset Oct 5 05:47:00 localhost nova_compute[297021]: vendor_id Oct 5 05:47:00 localhost nova_compute[297021]: frequencies Oct 5 05:47:00 localhost nova_compute[297021]: reenlightenment Oct 5 05:47:00 localhost nova_compute[297021]: tlbflush Oct 5 05:47:00 localhost nova_compute[297021]: ipi Oct 5 05:47:00 localhost nova_compute[297021]: avic Oct 5 05:47:00 localhost nova_compute[297021]: emsr_bitmap Oct 5 05:47:00 localhost nova_compute[297021]: xmm_input Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.401 2 DEBUG nova.virt.libvirt.volume.mount [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.404 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: /usr/libexec/qemu-kvm Oct 5 05:47:00 localhost nova_compute[297021]: kvm Oct 5 05:47:00 localhost nova_compute[297021]: pc-q35-rhel9.6.0 Oct 5 05:47:00 localhost nova_compute[297021]: i686 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: /usr/share/OVMF/OVMF_CODE.secboot.fd 
Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: rom Oct 5 05:47:00 localhost nova_compute[297021]: pflash Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: yes Oct 5 05:47:00 localhost nova_compute[297021]: no Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: no Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: on Oct 5 05:47:00 localhost nova_compute[297021]: off Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: on Oct 5 05:47:00 localhost nova_compute[297021]: off Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome Oct 5 05:47:00 localhost nova_compute[297021]: AMD Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 
localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 486 Oct 5 05:47:00 localhost nova_compute[297021]: 486-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-noTSX Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Broadwell-noTSX-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 
05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-noTSX Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 
05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v5 Oct 5 
05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Conroe Oct 5 05:47:00 localhost nova_compute[297021]: Conroe-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Cooperlake Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cooperlake-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Cooperlake-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Denverton Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Denverton-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Denverton-v2 Oct 5 05:47:00 localhost nova_compute[297021]: 
Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Denverton-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Dhyana Oct 5 05:47:00 localhost nova_compute[297021]: Dhyana-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Dhyana-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Genoa Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 
Oct 5 05:47:00 localhost nova_compute[297021]: [libvirt domain-capabilities CPU model list; XML markup lost in log capture. Recoverable model names, in order:] EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4
localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-noTSX-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v5 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Westmere Oct 5 05:47:00 localhost nova_compute[297021]: Westmere-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Westmere-v1 Oct 5 05:47:00 
localhost nova_compute[297021]: Westmere-v2 Oct 5 05:47:00 localhost nova_compute[297021]: athlon Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: athlon-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: core2duo Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: core2duo-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: coreduo Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: coreduo-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: kvm32 Oct 5 05:47:00 localhost nova_compute[297021]: kvm32-v1 Oct 5 05:47:00 localhost nova_compute[297021]: kvm64 Oct 5 05:47:00 localhost nova_compute[297021]: kvm64-v1 Oct 5 05:47:00 localhost nova_compute[297021]: n270 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: n270-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: pentium Oct 5 05:47:00 
localhost nova_compute[297021]: pentium-v1 Oct 5 05:47:00 localhost nova_compute[297021]: pentium2 Oct 5 05:47:00 localhost nova_compute[297021]: pentium2-v1 Oct 5 05:47:00 localhost nova_compute[297021]: pentium3 Oct 5 05:47:00 localhost nova_compute[297021]: pentium3-v1 Oct 5 05:47:00 localhost nova_compute[297021]: phenom Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: phenom-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: qemu32 Oct 5 05:47:00 localhost nova_compute[297021]: qemu32-v1 Oct 5 05:47:00 localhost nova_compute[297021]: qemu64 Oct 5 05:47:00 localhost nova_compute[297021]: qemu64-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: file Oct 5 05:47:00 localhost nova_compute[297021]: anonymous Oct 5 05:47:00 localhost nova_compute[297021]: memfd Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: disk Oct 5 05:47:00 localhost nova_compute[297021]: cdrom Oct 5 05:47:00 localhost nova_compute[297021]: floppy Oct 5 05:47:00 localhost nova_compute[297021]: lun Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: fdc Oct 5 05:47:00 localhost nova_compute[297021]: scsi Oct 5 05:47:00 localhost 
nova_compute[297021]: virtio Oct 5 05:47:00 localhost nova_compute[297021]: usb Oct 5 05:47:00 localhost nova_compute[297021]: sata Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: virtio Oct 5 05:47:00 localhost nova_compute[297021]: virtio-transitional Oct 5 05:47:00 localhost nova_compute[297021]: virtio-non-transitional Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: vnc Oct 5 05:47:00 localhost nova_compute[297021]: egl-headless Oct 5 05:47:00 localhost nova_compute[297021]: dbus Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: subsystem Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: default Oct 5 05:47:00 localhost nova_compute[297021]: mandatory Oct 5 05:47:00 localhost nova_compute[297021]: requisite Oct 5 05:47:00 localhost nova_compute[297021]: optional Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: usb Oct 5 05:47:00 localhost nova_compute[297021]: pci Oct 5 05:47:00 localhost nova_compute[297021]: scsi Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: virtio Oct 5 05:47:00 localhost 
nova_compute[297021]: virtio-transitional Oct 5 05:47:00 localhost nova_compute[297021]: virtio-non-transitional Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: random Oct 5 05:47:00 localhost nova_compute[297021]: egd Oct 5 05:47:00 localhost nova_compute[297021]: builtin Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: path Oct 5 05:47:00 localhost nova_compute[297021]: handle Oct 5 05:47:00 localhost nova_compute[297021]: virtiofs Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: tpm-tis Oct 5 05:47:00 localhost nova_compute[297021]: tpm-crb Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: emulator Oct 5 05:47:00 localhost nova_compute[297021]: external Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 2.0 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: usb Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: pty Oct 5 05:47:00 localhost nova_compute[297021]: unix Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 
05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: qemu Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: builtin Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: default Oct 5 05:47:00 localhost nova_compute[297021]: passt Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: isa Oct 5 05:47:00 localhost nova_compute[297021]: hyperv Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: relaxed Oct 5 05:47:00 localhost nova_compute[297021]: vapic Oct 5 05:47:00 localhost nova_compute[297021]: spinlocks Oct 5 05:47:00 localhost nova_compute[297021]: vpindex Oct 5 05:47:00 localhost nova_compute[297021]: runtime Oct 5 05:47:00 localhost nova_compute[297021]: synic Oct 5 05:47:00 localhost 
nova_compute[297021]: stimer Oct 5 05:47:00 localhost nova_compute[297021]: reset Oct 5 05:47:00 localhost nova_compute[297021]: vendor_id Oct 5 05:47:00 localhost nova_compute[297021]: frequencies Oct 5 05:47:00 localhost nova_compute[297021]: reenlightenment Oct 5 05:47:00 localhost nova_compute[297021]: tlbflush Oct 5 05:47:00 localhost nova_compute[297021]: ipi Oct 5 05:47:00 localhost nova_compute[297021]: avic Oct 5 05:47:00 localhost nova_compute[297021]: emsr_bitmap Oct 5 05:47:00 localhost nova_compute[297021]: xmm_input Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.452 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.457 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: /usr/libexec/qemu-kvm Oct 5 05:47:00 localhost nova_compute[297021]: kvm Oct 5 05:47:00 localhost nova_compute[297021]: pc-q35-rhel9.6.0 Oct 5 05:47:00 localhost nova_compute[297021]: x86_64 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: efi 
Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Oct 5 05:47:00 localhost nova_compute[297021]: /usr/share/edk2/ovmf/OVMF_CODE.fd Oct 5 05:47:00 localhost nova_compute[297021]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Oct 5 05:47:00 localhost nova_compute[297021]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: rom Oct 5 05:47:00 localhost nova_compute[297021]: pflash Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: yes Oct 5 05:47:00 localhost nova_compute[297021]: no Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: yes Oct 5 05:47:00 localhost nova_compute[297021]: no Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: on Oct 5 05:47:00 localhost nova_compute[297021]: off Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: on Oct 5 05:47:00 localhost nova_compute[297021]: off Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome Oct 5 05:47:00 localhost nova_compute[297021]: AMD Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 
Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 486 Oct 5 05:47:00 localhost nova_compute[297021]: 486-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-noTSX Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-noTSX-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Broadwell-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 
Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server
Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-noTSX
Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v3
Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v4
Oct 5 05:47:00 localhost nova_compute[297021]: Cascadelake-Server-v5
Oct 5 05:47:00 localhost nova_compute[297021]: Conroe
Oct 5 05:47:00 localhost nova_compute[297021]: Conroe-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Cooperlake
Oct 5 05:47:00 localhost nova_compute[297021]: Cooperlake-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Cooperlake-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Denverton
Oct 5 05:47:00 localhost nova_compute[297021]: Denverton-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Denverton-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Denverton-v3
Oct 5 05:47:00 localhost nova_compute[297021]: Dhyana
Oct 5 05:47:00 localhost nova_compute[297021]: Dhyana-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Dhyana-v2
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Genoa
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Genoa-v1
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-IBPB
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Milan
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Milan-v1
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Milan-v2
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome-v1
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome-v2
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome-v3
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-Rome-v4
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-v1
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-v2
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-v3
Oct 5 05:47:00 localhost nova_compute[297021]: EPYC-v4
Oct 5 05:47:00 localhost nova_compute[297021]: GraniteRapids
Oct 5 05:47:00 localhost nova_compute[297021]: GraniteRapids-v1
Oct 5 05:47:00 localhost nova_compute[297021]: GraniteRapids-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-noTSX
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-noTSX-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-v3
Oct 5 05:47:00 localhost nova_compute[297021]: Haswell-v4
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-noTSX
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v3
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v4
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v5
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v6
Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v7
Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge
Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge-v1
Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge-v2
Oct 5 05:47:00 localhost nova_compute[297021]: KnightsMill
Oct 5 05:47:00 localhost nova_compute[297021]: KnightsMill-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem
Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem-v2
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G1-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G2
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G2-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G3
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G3-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G4
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G4-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G5
Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G5-v1
Oct 5 05:47:00 localhost nova_compute[297021]: Penryn
Oct 5 05:47:00 localhost nova_compute[297021]: Penryn-v1
Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge
Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge-IBRS
Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge-v1
Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge-v2
Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids
Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids-v1
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: SierraForest Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: SierraForest-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-noTSX-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 
Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 
Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-noTSX-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 
05:47:00 localhost nova_compute[297021]: Skylake-Server-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v5 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 
localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Westmere Oct 5 05:47:00 localhost nova_compute[297021]: Westmere-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Westmere-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Westmere-v2 Oct 5 05:47:00 localhost nova_compute[297021]: athlon Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: athlon-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: core2duo Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: core2duo-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: coreduo Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: coreduo-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: kvm32 Oct 5 05:47:00 localhost nova_compute[297021]: kvm32-v1 Oct 5 05:47:00 localhost nova_compute[297021]: kvm64 Oct 5 05:47:00 
localhost nova_compute[297021]: kvm64-v1 Oct 5 05:47:00 localhost nova_compute[297021]: n270 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: n270-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: pentium Oct 5 05:47:00 localhost nova_compute[297021]: pentium-v1 Oct 5 05:47:00 localhost nova_compute[297021]: pentium2 Oct 5 05:47:00 localhost nova_compute[297021]: pentium2-v1 Oct 5 05:47:00 localhost nova_compute[297021]: pentium3 Oct 5 05:47:00 localhost nova_compute[297021]: pentium3-v1 Oct 5 05:47:00 localhost nova_compute[297021]: phenom Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: phenom-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: qemu32 Oct 5 05:47:00 localhost nova_compute[297021]: qemu32-v1 Oct 5 05:47:00 localhost nova_compute[297021]: qemu64 Oct 5 05:47:00 localhost nova_compute[297021]: qemu64-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: file Oct 5 05:47:00 localhost nova_compute[297021]: anonymous Oct 5 05:47:00 localhost nova_compute[297021]: memfd Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: disk Oct 5 05:47:00 localhost nova_compute[297021]: cdrom Oct 5 05:47:00 localhost nova_compute[297021]: floppy Oct 5 05:47:00 localhost nova_compute[297021]: lun Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: fdc Oct 5 05:47:00 localhost nova_compute[297021]: scsi Oct 5 05:47:00 localhost nova_compute[297021]: virtio Oct 5 05:47:00 localhost nova_compute[297021]: usb Oct 5 05:47:00 localhost nova_compute[297021]: sata Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: virtio Oct 5 05:47:00 localhost nova_compute[297021]: virtio-transitional Oct 5 05:47:00 localhost nova_compute[297021]: virtio-non-transitional Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: vnc Oct 5 05:47:00 localhost nova_compute[297021]: egl-headless Oct 5 05:47:00 localhost nova_compute[297021]: dbus Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: subsystem Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: default Oct 5 05:47:00 localhost nova_compute[297021]: mandatory Oct 5 05:47:00 localhost nova_compute[297021]: requisite Oct 5 05:47:00 localhost nova_compute[297021]: optional Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: usb Oct 5 05:47:00 localhost nova_compute[297021]: pci Oct 5 05:47:00 localhost nova_compute[297021]: scsi Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: virtio Oct 5 05:47:00 localhost nova_compute[297021]: virtio-transitional Oct 5 05:47:00 localhost nova_compute[297021]: virtio-non-transitional Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: random Oct 5 05:47:00 localhost nova_compute[297021]: egd Oct 5 05:47:00 localhost nova_compute[297021]: builtin Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: path Oct 5 05:47:00 localhost nova_compute[297021]: handle Oct 5 05:47:00 localhost nova_compute[297021]: virtiofs Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: tpm-tis Oct 5 05:47:00 localhost nova_compute[297021]: tpm-crb Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: emulator Oct 5 05:47:00 localhost nova_compute[297021]: external Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 2.0 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: 
Oct 5 05:47:00 localhost nova_compute[297021]: [libvirt guest capabilities XML garbled in capture; recoverable element values: usb, pty, unix, qemu, builtin, default, passt, isa; hyperv enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.537 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: [XML garbled in capture; recoverable values: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-i440fx-rhel7.6.0, arch x86_64; firmware /usr/share/OVMF/OVMF_CODE.secboot.fd, types rom and pflash, yes/no and on/off attribute flags; host CPU model EPYC-Rome, vendor AMD; supported CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, (list truncated in capture)]
5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v5 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v6 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Icelake-Server-v7 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 
localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: IvyBridge-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: KnightsMill Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: KnightsMill-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Nehalem-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G1 Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G1-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G2 Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G2-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G3 Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G3-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G4-v1 Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G5 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Opteron_G5-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Penryn Oct 5 05:47:00 localhost nova_compute[297021]: Penryn-v1 Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge-v1 Oct 5 05:47:00 localhost nova_compute[297021]: SandyBridge-v2 Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: SapphireRapids-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: SierraForest Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: SierraForest-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 
localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-noTSX-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Client-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-noTSX-IBRS Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 
localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v1 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v2 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v3 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost 
nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v4 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Skylake-Server-v5 Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Snowridge Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 5 05:47:00 localhost nova_compute[297021]: Oct 
Oct 5 05:47:00 localhost nova_compute[297021]: [libvirt domain capabilities XML — markup stripped during log extraction; recoverable element values follow] CPU models: Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1; memory backing: file, anonymous, memfd; disk devices: disk, cdrom, floppy, lun; disk buses: ide, fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional; graphics: vnc, egl-headless, dbus; hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; hostdev subsystem types: usb, pci, scsi; interface models: virtio, virtio-transitional, virtio-non-transitional; RNG backends: random, egd, builtin; filesystem drivers: path, handle, virtiofs; TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; TPM version: 2.0; redirdev bus: usb; channel types: pty, unix; crypto backends: qemu, builtin; network backends: default, passt; panic models: isa, hyperv; Hyper-V features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.582 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.582 2 INFO nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Secure Boot support detected#033[00m
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.584 2 INFO nova.virt.libvirt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.592 2 DEBUG nova.virt.libvirt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.612 2 INFO nova.virt.node [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Determined node identity 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from /var/lib/nova/compute_id#033[00m
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.623 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Verified node 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c matches my host np0005471150.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.641 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.644 2 DEBUG nova.virt.libvirt.vif [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] vif_type=ovs
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T08:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005471150.localdomain',hostname='test',id=2,image_ref='e521096d-c3e6-4c8e-9ba6-a35f9a80b219',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T08:30:14Z,launched_on='np0005471150.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005471150.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='8b36437b65444bcdac75beef77b6981e',ramdisk_id='',reservation_id='r-dff44nva',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-10-05T08:30:14Z,user_data=None,user_id='8d17cd5027274bc5883e2354d4ddec6b',uuid=2b20c302-a8d1-4ee0-990b-24973ca23df1,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": 
false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.645 2 DEBUG nova.network.os_vif_util [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Converting VIF {"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.645 
2 DEBUG nova.network.os_vif_util [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.646 2 DEBUG os_vif [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.671 2 DEBUG ovsdbapp.backend.ovs_idl [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.671 2 DEBUG ovsdbapp.backend.ovs_idl [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.671 2 DEBUG ovsdbapp.backend.ovs_idl [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.671 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:47:00 localhost nova_compute[297021]: 2025-10-05 09:47:00.694 2 INFO oslo.privsep.daemon [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpq9lulqh5/privsep.sock']#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.296 2 INFO oslo.privsep.daemon [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.196 40 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.200 40 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.204 40 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.204 40 INFO oslo.privsep.daemon [-] privsep daemon running as pid 40#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.546 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.546 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4db5c636-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4db5c636-30, col_values=(('external_ids', {'iface-id': '4db5c636-3094-4e86-9093-8123489e64be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:2c:a3', 'vm-uuid': '2b20c302-a8d1-4ee0-990b-24973ca23df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.549 2 INFO os_vif [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30')#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.550 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m 
Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.554 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.555 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Oct 5 05:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:47:01 localhost podman[297277]: 2025-10-05 09:47:01.668844948 +0000 UTC m=+0.080739970 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:47:01 localhost podman[297277]: 2025-10-05 09:47:01.699601876 +0000 UTC m=+0.111496908 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:47:01 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.867 2 DEBUG oslo_concurrency.lockutils [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.867 2 DEBUG oslo_concurrency.lockutils [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.868 2 DEBUG oslo_concurrency.lockutils [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.868 2 DEBUG nova.compute.resource_tracker 
[None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:47:01 localhost nova_compute[297021]: 2025-10-05 09:47:01.869 2 DEBUG oslo_concurrency.processutils [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:47:02 localhost nova_compute[297021]: 2025-10-05 09:47:02.319 2 DEBUG oslo_concurrency.processutils [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:47:02 localhost nova_compute[297021]: 2025-10-05 09:47:02.382 2 DEBUG nova.virt.libvirt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:47:02 localhost nova_compute[297021]: 2025-10-05 09:47:02.383 2 DEBUG nova.virt.libvirt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:47:02 localhost nova_compute[297021]: 2025-10-05 09:47:02.608 2 WARNING nova.virt.libvirt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:47:02 localhost nova_compute[297021]: 2025-10-05 09:47:02.609 2 DEBUG nova.compute.resource_tracker [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12103MB free_disk=41.83720779418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:47:02 localhost nova_compute[297021]: 2025-10-05 09:47:02.610 2 DEBUG oslo_concurrency.lockutils [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:02 localhost nova_compute[297021]: 2025-10-05 09:47:02.610 2 DEBUG oslo_concurrency.lockutils [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.055 2 DEBUG nova.compute.resource_tracker [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.055 2 DEBUG nova.compute.resource_tracker [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.055 2 DEBUG nova.compute.resource_tracker [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.239 2 DEBUG nova.scheduler.client.report [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Refreshing inventories for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.262 2 DEBUG nova.scheduler.client.report [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Updating ProviderTree inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 05:47:03 localhost 
nova_compute[297021]: 2025-10-05 09:47:03.262 2 DEBUG nova.compute.provider_tree [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.276 2 DEBUG nova.scheduler.client.report [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Refreshing aggregate associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.305 2 DEBUG nova.scheduler.client.report [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Refreshing trait associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, traits: 
HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.348 2 DEBUG oslo_concurrency.processutils [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.813 2 DEBUG oslo_concurrency.processutils [None 
req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.819 2 DEBUG nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Oct 5 05:47:03 localhost nova_compute[297021]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.820 2 INFO nova.virt.libvirt.host [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] kernel doesn't support AMD SEV#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.822 2 DEBUG nova.compute.provider_tree [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:47:03 localhost nova_compute[297021]: 2025-10-05 09:47:03.822 2 DEBUG nova.virt.libvirt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 5 05:47:04 localhost nova_compute[297021]: 2025-10-05 09:47:04.117 2 DEBUG nova.scheduler.client.report [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 
'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:47:04 localhost nova_compute[297021]: 2025-10-05 09:47:04.823 2 DEBUG nova.compute.resource_tracker [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:47:04 localhost nova_compute[297021]: 2025-10-05 09:47:04.824 2 DEBUG oslo_concurrency.lockutils [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:04 localhost nova_compute[297021]: 2025-10-05 09:47:04.824 2 DEBUG nova.service [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Oct 5 05:47:04 localhost nova_compute[297021]: 2025-10-05 09:47:04.854 2 DEBUG nova.service [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Oct 5 05:47:04 localhost nova_compute[297021]: 2025-10-05 09:47:04.854 2 DEBUG nova.servicegroup.drivers.db [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] DB_Driver: join new ServiceGroup member np0005471150.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Oct 5 05:47:05 localhost nova_compute[297021]: 2025-10-05 09:47:05.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:06 localhost nova_compute[297021]: 2025-10-05 09:47:06.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:47:06 localhost podman[297339]: 2025-10-05 09:47:06.676656468 +0000 UTC m=+0.087602047 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Oct 5 05:47:06 localhost podman[297339]: 2025-10-05 09:47:06.743877779 +0000 UTC m=+0.154823318 container 
exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Oct 5 05:47:06 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:47:07 localhost nova_compute[297021]: 2025-10-05 09:47:07.856 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:07 localhost nova_compute[297021]: 2025-10-05 09:47:07.875 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Triggering sync for uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Oct 5 05:47:07 localhost nova_compute[297021]: 2025-10-05 09:47:07.876 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:07 localhost nova_compute[297021]: 2025-10-05 09:47:07.877 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:07 localhost nova_compute[297021]: 2025-10-05 09:47:07.877 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:07 localhost nova_compute[297021]: 2025-10-05 09:47:07.901 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" 
"released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:47:10 localhost systemd[1]: tmp-crun.vwa8DX.mount: Deactivated successfully. Oct 5 05:47:10 localhost podman[297365]: 2025-10-05 09:47:10.68697048 +0000 UTC m=+0.094659430 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute) Oct 5 05:47:10 localhost podman[297365]: 2025-10-05 09:47:10.700018314 +0000 UTC m=+0.107707284 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 05:47:10 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:47:10 localhost nova_compute[297021]: 2025-10-05 09:47:10.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:11 localhost nova_compute[297021]: 2025-10-05 09:47:11.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35295 DF PROTO=TCP SPT=51758 DPT=9102 SEQ=2458371879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF2A86D0000000001030307) Oct 5 05:47:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35296 DF PROTO=TCP SPT=51758 DPT=9102 SEQ=2458371879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF2AC5D0000000001030307) Oct 5 05:47:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 05:47:14 localhost podman[297384]: 2025-10-05 09:47:14.681176884 +0000 UTC m=+0.084411791 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal) Oct 5 05:47:14 localhost podman[297384]: 2025-10-05 09:47:14.7206941 +0000 UTC m=+0.123929017 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 05:47:14 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:47:15 localhost nova_compute[297021]: 2025-10-05 09:47:15.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35297 DF PROTO=TCP SPT=51758 DPT=9102 SEQ=2458371879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF2B45D0000000001030307) Oct 5 05:47:16 localhost nova_compute[297021]: 2025-10-05 09:47:16.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:47:16 localhost podman[297404]: 2025-10-05 09:47:16.674657382 +0000 UTC m=+0.078794997 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:47:16 localhost podman[297404]: 2025-10-05 09:47:16.681857048 +0000 UTC m=+0.085994673 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:47:16 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:47:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:20.447 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:20.447 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:20.449 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35298 DF PROTO=TCP SPT=51758 DPT=9102 SEQ=2458371879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF2C41D0000000001030307) Oct 5 05:47:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:47:20 localhost podman[297426]: 2025-10-05 09:47:20.664987601 +0000 UTC m=+0.077885883 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:47:20 localhost podman[297426]: 2025-10-05 09:47:20.672481094 +0000 UTC m=+0.085379396 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:47:20 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:47:20 localhost nova_compute[297021]: 2025-10-05 09:47:20.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:21 localhost podman[248506]: time="2025-10-05T09:47:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:47:21 localhost podman[248506]: @ - - [05/Oct/2025:09:47:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 138317 "" "Go-http-client/1.1" Oct 5 05:47:21 localhost podman[248506]: @ - - [05/Oct/2025:09:47:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17344 "" "Go-http-client/1.1" Oct 5 05:47:21 localhost nova_compute[297021]: 2025-10-05 09:47:21.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:22 localhost openstack_network_exporter[250601]: ERROR 09:47:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:47:22 localhost openstack_network_exporter[250601]: ERROR 09:47:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:47:22 localhost openstack_network_exporter[250601]: ERROR 09:47:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:47:22 localhost openstack_network_exporter[250601]: ERROR 09:47:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:47:22 localhost openstack_network_exporter[250601]: Oct 5 05:47:22 localhost openstack_network_exporter[250601]: ERROR 09:47:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:47:22 localhost openstack_network_exporter[250601]: Oct 5 05:47:23 localhost nova_compute[297021]: 2025-10-05 
09:47:23.861 2 DEBUG nova.compute.manager [None req-1d84492b-79f1-4c21-9f71-59e1d9497745 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:47:23 localhost nova_compute[297021]: 2025-10-05 09:47:23.866 2 INFO nova.compute.manager [None req-1d84492b-79f1-4c21-9f71-59e1d9497745 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Retrieving diagnostics#033[00m Oct 5 05:47:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:47:24 localhost podman[297449]: 2025-10-05 09:47:24.669515945 +0000 UTC m=+0.081406408 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:47:24 localhost podman[297449]: 2025-10-05 09:47:24.680285769 +0000 UTC m=+0.092176192 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:47:24 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:47:25 localhost nova_compute[297021]: 2025-10-05 09:47:25.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:47:25 localhost podman[297484]: 2025-10-05 09:47:25.980612446 +0000 UTC m=+0.078887490 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:47:26 localhost podman[297484]: 2025-10-05 09:47:26.01492178 +0000 UTC m=+0.113196784 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:47:26 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:47:26 localhost nova_compute[297021]: 2025-10-05 09:47:26.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:30 localhost nova_compute[297021]: 2025-10-05 09:47:30.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:30 localhost nova_compute[297021]: 2025-10-05 09:47:30.784 2 DEBUG oslo_concurrency.lockutils [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:30 localhost nova_compute[297021]: 2025-10-05 09:47:30.785 2 DEBUG oslo_concurrency.lockutils [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" acquired by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:30 localhost nova_compute[297021]: 2025-10-05 09:47:30.786 2 DEBUG nova.compute.manager [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:47:30 localhost nova_compute[297021]: 2025-10-05 09:47:30.791 2 DEBUG nova.compute.manager [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Stopping 
instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m Oct 5 05:47:30 localhost nova_compute[297021]: 2025-10-05 09:47:30.796 2 DEBUG nova.objects.instance [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Lazy-loading 'flavor' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:47:30 localhost nova_compute[297021]: 2025-10-05 09:47:30.840 2 DEBUG nova.virt.libvirt.driver [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Oct 5 05:47:31 localhost nova_compute[297021]: 2025-10-05 09:47:31.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:47:32 localhost podman[297571]: 2025-10-05 09:47:32.671322456 +0000 UTC m=+0.082954670 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Oct 5 05:47:32 localhost podman[297571]: 2025-10-05 09:47:32.706793212 +0000 UTC 
m=+0.118425466 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:47:32 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:47:33 localhost kernel: device tap4db5c636-30 left promiscuous mode Oct 5 05:47:33 localhost NetworkManager[5981]: [1759657653.3224] device (tap4db5c636-30): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Oct 5 05:47:33 localhost ovn_controller[157794]: 2025-10-05T09:47:33Z|00047|binding|INFO|Releasing lport 4db5c636-3094-4e86-9093-8123489e64be from this chassis (sb_readonly=0) Oct 5 05:47:33 localhost ovn_controller[157794]: 2025-10-05T09:47:33Z|00048|binding|INFO|Setting lport 4db5c636-3094-4e86-9093-8123489e64be down in Southbound Oct 5 05:47:33 localhost ovn_controller[157794]: 2025-10-05T09:47:33Z|00049|binding|INFO|Removing iface tap4db5c636-30 ovn-installed in OVS Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.346 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:2c:a3 192.168.0.56'], port_security=['fa:16:3e:a6:2c:a3 192.168.0.56'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.56/24', 'neutron:device_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005471150.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': '8b36437b65444bcdac75beef77b6981e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 
'4fbe78ed-92dd-4e52-8c97-e662f3cb3af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f49a96c-a4ec-4b07-9e41-306ef014a4cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=4db5c636-3094-4e86-9093-8123489e64be) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.348 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 4db5c636-3094-4e86-9093-8123489e64be in datapath 20d6a6dc-0f38-4a89-b3fc-56befd04e92f unbound from our chassis#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.350 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20d6a6dc-0f38-4a89-b3fc-56befd04e92f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 05:47:33 localhost ovn_controller[157794]: 2025-10-05T09:47:33Z|00050|ovn_bfd|INFO|Disabled BFD on interface ovn-fe3fe5-0 Oct 5 05:47:33 localhost ovn_controller[157794]: 2025-10-05T09:47:33Z|00051|ovn_bfd|INFO|Disabled BFD on interface ovn-891f35-0 Oct 5 05:47:33 localhost ovn_controller[157794]: 2025-10-05T09:47:33Z|00052|ovn_bfd|INFO|Disabled BFD on interface ovn-85ea67-0 Oct 5 05:47:33 localhost ovn_controller[157794]: 2025-10-05T09:47:33Z|00053|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.355 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc3b035-ad6c-4f24-8a66-4f29bbf73f43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.361 163434 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f namespace which is not needed anymore#033[00m Oct 5 05:47:33 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully. Oct 5 05:47:33 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 4min 444ms CPU time. Oct 5 05:47:33 localhost systemd-machined[84982]: Machine qemu-1-instance-00000002 terminated. Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:33 localhost ovn_controller[157794]: 2025-10-05T09:47:33Z|00054|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:33 localhost systemd[1]: libpod-f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4.scope: Deactivated successfully. 
Oct 5 05:47:33 localhost podman[297616]: 2025-10-05 09:47:33.540060179 +0000 UTC m=+0.068015814 container died f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1) Oct 5 05:47:33 localhost NetworkManager[5981]: [1759657653.5606] manager: (tap4db5c636-30): new Tun device (/org/freedesktop/NetworkManager/Devices/15) Oct 5 05:47:33 localhost systemd[1]: var-lib-containers-storage-overlay-9cfe7524ac8a56ed96644e8d260c683ffdbe8eccf2a9392e52f36c399e070e93-merged.mount: Deactivated successfully. Oct 5 05:47:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:47:33 localhost podman[297616]: 2025-10-05 09:47:33.683102255 +0000 UTC m=+0.211057860 container cleanup f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, version=17.1.9, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 05:47:33 localhost podman[297628]: 2025-10-05 09:47:33.693633051 +0000 UTC m=+0.146201443 container cleanup f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, release=1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:47:33 localhost systemd[1]: libpod-conmon-f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4.scope: Deactivated successfully. Oct 5 05:47:33 localhost podman[297654]: 2025-10-05 09:47:33.777558997 +0000 UTC m=+0.074668414 container remove f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, 
vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9) Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.781 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3c4085-69f4-46ed-a4be-169951ec7ca5]: (4, ('Sun Oct 5 09:47:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f (f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4)\nf5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4\nSun Oct 5 09:47:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f (f5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4)\nf5cd0d3c9604a328b36fa645f49aaacbbde3e5dfca062d3de5a4d1679801cfb4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.784 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c364f2-e141-4a58-b58b-93284c29006c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.785 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20d6a6dc-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:33 localhost kernel: device tap20d6a6dc-00 left promiscuous mode Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.803 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[f4dc4263-53b1-441b-8e0f-39b49690322d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.818 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[5eca14c9-0dea-4c2a-a668-61a785ac876f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.820 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[96582c7e-b1e9-45e4-b98d-8cf62e69053a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.834 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcbf409-e743-468c-bc49-373bda54cd3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 
'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649311, 'reachable_time': 18716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 
'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297674, 'error': None, 'target': 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:47:33 localhost systemd[1]: run-netns-ovnmeta\x2d20d6a6dc\x2d0f38\x2d4a89\x2db3fc\x2d56befd04e92f.mount: Deactivated successfully. Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.844 163645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.845 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[59289842-85ca-4f89-a399-d8a1c5747830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.859 2 INFO nova.virt.libvirt.driver [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Instance shutdown successfully after 3 seconds.#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.865 2 INFO nova.virt.libvirt.driver [-] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Instance destroyed successfully.#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.866 2 DEBUG nova.objects.instance [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.877 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m 
Oct 5 05:47:33 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:33.878 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.893 2 DEBUG nova.compute.manager [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.976 2 DEBUG oslo_concurrency.lockutils [None req-6a1aa81d-460d-441c-8faf-81d6fa6fd298 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" "released" by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" :: held 3.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.985 2 DEBUG nova.compute.manager [req-b97b5d8e-74ca-4e31-9f35-84b7c77875f1 req-13e7412d-4b0f-486d-a36c-1048980fdb88 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Received event network-vif-unplugged-4db5c636-3094-4e86-9093-8123489e64be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.986 2 DEBUG oslo_concurrency.lockutils [req-b97b5d8e-74ca-4e31-9f35-84b7c77875f1 req-13e7412d-4b0f-486d-a36c-1048980fdb88 
89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.987 2 DEBUG oslo_concurrency.lockutils [req-b97b5d8e-74ca-4e31-9f35-84b7c77875f1 req-13e7412d-4b0f-486d-a36c-1048980fdb88 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.987 2 DEBUG oslo_concurrency.lockutils [req-b97b5d8e-74ca-4e31-9f35-84b7c77875f1 req-13e7412d-4b0f-486d-a36c-1048980fdb88 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.988 2 DEBUG nova.compute.manager [req-b97b5d8e-74ca-4e31-9f35-84b7c77875f1 req-13e7412d-4b0f-486d-a36c-1048980fdb88 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] No waiting events found dispatching network-vif-unplugged-4db5c636-3094-4e86-9093-8123489e64be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 05:47:33 localhost nova_compute[297021]: 2025-10-05 09:47:33.988 2 WARNING nova.compute.manager [req-b97b5d8e-74ca-4e31-9f35-84b7c77875f1 req-13e7412d-4b0f-486d-a36c-1048980fdb88 
89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Received unexpected event network-vif-unplugged-4db5c636-3094-4e86-9093-8123489e64be for instance with vm_state active and task_state powering-off.#033[00m Oct 5 05:47:35 localhost nova_compute[297021]: 2025-10-05 09:47:35.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.025 2 DEBUG nova.compute.manager [req-41f01b83-8280-4798-95e1-b270a6a18c9d req-3581c5fb-6017-4fd3-b0e8-78339fd4019e 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Received event network-vif-plugged-4db5c636-3094-4e86-9093-8123489e64be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.026 2 DEBUG oslo_concurrency.lockutils [req-41f01b83-8280-4798-95e1-b270a6a18c9d req-3581c5fb-6017-4fd3-b0e8-78339fd4019e 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.026 2 DEBUG oslo_concurrency.lockutils [req-41f01b83-8280-4798-95e1-b270a6a18c9d req-3581c5fb-6017-4fd3-b0e8-78339fd4019e 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:36 
localhost nova_compute[297021]: 2025-10-05 09:47:36.027 2 DEBUG oslo_concurrency.lockutils [req-41f01b83-8280-4798-95e1-b270a6a18c9d req-3581c5fb-6017-4fd3-b0e8-78339fd4019e 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.028 2 DEBUG nova.compute.manager [req-41f01b83-8280-4798-95e1-b270a6a18c9d req-3581c5fb-6017-4fd3-b0e8-78339fd4019e 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] No waiting events found dispatching network-vif-plugged-4db5c636-3094-4e86-9093-8123489e64be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.028 2 WARNING nova.compute.manager [req-41f01b83-8280-4798-95e1-b270a6a18c9d req-3581c5fb-6017-4fd3-b0e8-78339fd4019e 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Received unexpected event network-vif-plugged-4db5c636-3094-4e86-9093-8123489e64be for instance with vm_state stopped and task_state None.#033[00m Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.811 2 DEBUG nova.compute.manager [None req-29741fe6-999d-4136-9e37-681c539a35aa 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server [None req-29741fe6-999d-4136-9e37-681c539a35aa 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 in power state shutdown. Cannot get_diagnostics while the instance is in this state. Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last): Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR 
oslo_messaging.rpc.server _emit_versioned_exception_notification( Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server self.force_reraise() Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server raise self.value Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context, Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server self.force_reraise() Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR 
oslo_messaging.rpc.server raise self.value Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server raise exception.InstanceInvalidState( Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 in power state shutdown. Cannot get_diagnostics while the instance is in this state. Oct 5 05:47:36 localhost nova_compute[297021]: 2025-10-05 09:47:36.835 2 ERROR oslo_messaging.rpc.server #033[00m Oct 5 05:47:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:47:37 localhost podman[297676]: 2025-10-05 09:47:37.68025905 +0000 UTC m=+0.086119947 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:47:37 localhost podman[297676]: 2025-10-05 09:47:37.721341659 +0000 UTC m=+0.127202556 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:47:37 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:47:37 localhost ovn_metadata_agent[163429]: 2025-10-05 09:47:37.880 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:47:40 localhost nova_compute[297021]: 2025-10-05 09:47:40.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:47:41 localhost nova_compute[297021]: 2025-10-05 09:47:41.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:41 localhost systemd[1]: tmp-crun.tPA2PC.mount: Deactivated successfully. Oct 5 05:47:41 localhost podman[297702]: 2025-10-05 09:47:41.681321639 +0000 UTC m=+0.090220060 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm) Oct 5 05:47:41 localhost podman[297702]: 2025-10-05 09:47:41.696669397 +0000 UTC m=+0.105567838 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 
05:47:41 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:47:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35396 DF PROTO=TCP SPT=55898 DPT=9102 SEQ=390474103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF31D9E0000000001030307) Oct 5 05:47:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35397 DF PROTO=TCP SPT=55898 DPT=9102 SEQ=390474103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF3219D0000000001030307) Oct 5 05:47:44 localhost sshd[297722]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:47:45 localhost podman[297724]: 2025-10-05 09:47:45.000105067 +0000 UTC m=+0.089095319 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.) 
Oct 5 05:47:45 localhost podman[297724]: 2025-10-05 09:47:45.018985882 +0000 UTC m=+0.107976134 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6) Oct 5 05:47:45 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:47:45 localhost nova_compute[297021]: 2025-10-05 09:47:45.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35398 DF PROTO=TCP SPT=55898 DPT=9102 SEQ=390474103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF3299D0000000001030307) Oct 5 05:47:46 localhost nova_compute[297021]: 2025-10-05 09:47:46.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:47:46 localhost podman[297745]: 2025-10-05 09:47:46.990816988 +0000 UTC m=+0.079400385 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:47:47 localhost podman[297745]: 2025-10-05 09:47:47.028871726 +0000 UTC m=+0.117455093 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:47:47 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:47:48 localhost nova_compute[297021]: 2025-10-05 09:47:48.573 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 05:47:48 localhost nova_compute[297021]: 2025-10-05 09:47:48.574 2 INFO nova.compute.manager [-] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] VM Stopped (Lifecycle Event)#033[00m Oct 5 05:47:48 localhost nova_compute[297021]: 2025-10-05 09:47:48.600 2 DEBUG nova.compute.manager [None req-83b62d16-75c1-4b4a-aaee-b3140dc8432d - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:47:48 localhost nova_compute[297021]: 2025-10-05 09:47:48.605 2 DEBUG nova.compute.manager [None req-83b62d16-75c1-4b4a-aaee-b3140dc8432d - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 05:47:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35399 DF PROTO=TCP SPT=55898 DPT=9102 SEQ=390474103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF3395E0000000001030307) Oct 5 05:47:50 localhost nova_compute[297021]: 2025-10-05 09:47:50.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:51 localhost podman[248506]: time="2025-10-05T09:47:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:47:51 localhost podman[248506]: @ - - [05/Oct/2025:09:47:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 136326 "" "Go-http-client/1.1" Oct 5 05:47:51 localhost podman[248506]: @ - - [05/Oct/2025:09:47:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16865 "" "Go-http-client/1.1" Oct 5 05:47:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:51 localhost podman[297768]: 2025-10-05 09:47:51.671413126 +0000 UTC m=+0.079833278 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', 
'--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:47:51 localhost podman[297768]: 2025-10-05 09:47:51.724892773 +0000 UTC m=+0.133312855 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The 
Prometheus Authors ) Oct 5 05:47:51 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.944 2 DEBUG nova.compute.manager [None req-ac43206c-919b-4b10-8af3-5588befb7ad3 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server [None req-ac43206c-919b-4b10-8af3-5588befb7ad3 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 in power state shutdown. Cannot get_diagnostics while the instance is in this state. Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last): Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server _emit_versioned_exception_notification( Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server self.force_reraise() Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server raise self.value Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context, Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR 
oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server self.force_reraise() Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server raise self.value Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server raise exception.InstanceInvalidState( Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 in power state shutdown. Cannot get_diagnostics while the instance is in this state. 
Oct 5 05:47:51 localhost nova_compute[297021]: 2025-10-05 09:47:51.971 2 ERROR oslo_messaging.rpc.server #033[00m Oct 5 05:47:52 localhost openstack_network_exporter[250601]: ERROR 09:47:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:47:52 localhost openstack_network_exporter[250601]: ERROR 09:47:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:47:52 localhost openstack_network_exporter[250601]: ERROR 09:47:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:47:52 localhost openstack_network_exporter[250601]: ERROR 09:47:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:47:52 localhost openstack_network_exporter[250601]: Oct 5 05:47:52 localhost openstack_network_exporter[250601]: ERROR 09:47:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:47:52 localhost openstack_network_exporter[250601]: Oct 5 05:47:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:47:55 localhost podman[297791]: 2025-10-05 09:47:55.651509871 +0000 UTC m=+0.064423727 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:47:55 localhost podman[297791]: 2025-10-05 09:47:55.662259833 +0000 UTC m=+0.075173679 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:47:55 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:47:55 localhost nova_compute[297021]: 2025-10-05 09:47:55.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:47:56 localhost podman[297810]: 2025-10-05 09:47:56.650360615 +0000 UTC m=+0.060540950 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:47:56 localhost nova_compute[297021]: 2025-10-05 09:47:56.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:47:56 localhost podman[297810]: 2025-10-05 09:47:56.658921919 +0000 UTC m=+0.069102304 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, config_id=iscsid) Oct 5 05:47:56 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:47:57 localhost nova_compute[297021]: 2025-10-05 09:47:57.715 2 DEBUG nova.objects.instance [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Lazy-loading 'flavor' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:47:57 localhost nova_compute[297021]: 2025-10-05 09:47:57.732 2 DEBUG oslo_concurrency.lockutils [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:47:57 localhost nova_compute[297021]: 2025-10-05 09:47:57.732 2 DEBUG oslo_concurrency.lockutils [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:47:57 localhost nova_compute[297021]: 2025-10-05 09:47:57.733 2 DEBUG nova.network.neutron [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 05:47:57 localhost nova_compute[297021]: 2025-10-05 09:47:57.733 2 DEBUG nova.objects.instance [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default 
default] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:47:59 localhost nova_compute[297021]: 2025-10-05 09:47:59.461 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:59 localhost nova_compute[297021]: 2025-10-05 09:47:59.462 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:59 localhost nova_compute[297021]: 2025-10-05 09:47:59.463 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:47:59 localhost nova_compute[297021]: 2025-10-05 09:47:59.463 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:47:59 localhost nova_compute[297021]: 2025-10-05 09:47:59.487 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:47:59 localhost nova_compute[297021]: 2025-10-05 09:47:59.975 2 DEBUG nova.network.neutron [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 
2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.002 2 DEBUG oslo_concurrency.lockutils [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.004 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:48:00 
localhost nova_compute[297021]: 2025-10-05 09:48:00.004 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.005 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.036 2 INFO nova.virt.libvirt.driver [-] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Instance destroyed successfully.#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.037 2 DEBUG nova.objects.instance [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.046 2 DEBUG nova.objects.instance [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Lazy-loading 'resources' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.058 2 DEBUG nova.virt.libvirt.vif [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T08:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005471150.localdomain',hostname='test',id=2,image_ref='e521096d-c3e6-4c8e-9ba6-a35f9a80b219',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T08:30:14Z,launched_on='np0005471150.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005471150.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=4,progress=0,project_id='8b36437b65444bcdac75beef77b6981e',ramdisk_id='',reservation_id='r-dff44nva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='e521096d-c3e6-4c8e-9ba6-a35f9a80b219',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-10-05T09:47:33Z,user_data=None,user_id='8d17cd5027274bc5883e2354d4ddec6b',uuid=2b20c302-a8d1-4ee0-990b-24973ca23df1,vcpu_model=,vcpus=1,vm_mode=None,vm_state='
stopped') vif={"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.058 2 DEBUG nova.network.os_vif_util [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Converting VIF {"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": 
"192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.059 2 DEBUG nova.network.os_vif_util [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.060 2 DEBUG os_vif [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4db5c636-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.113 2 INFO os_vif [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30')#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.115 2 DEBUG nova.virt.libvirt.host [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.115 2 INFO nova.virt.libvirt.host [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] UEFI support detected#033[00m 
Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.128 2 DEBUG nova.virt.libvirt.driver [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Start _get_guest_xml network_info=[{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} image_meta=ImageMeta(checksum=,container_format='bare',created_at=,direct_url=,disk_format='qcow2',id=e521096d-c3e6-4c8e-9ba6-a35f9a80b219,min_disk=1,min_ram=0,name=,owner=,properties=ImageMetaProps,protected=,size=,status=,tags=,updated_at=,virtual_size=,visibility=) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'image_id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}], 'ephemerals': [{'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.133 2 WARNING nova.virt.libvirt.driver [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.136 2 DEBUG nova.virt.libvirt.host [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Searching host: 'np0005471150.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.136 2 DEBUG nova.virt.libvirt.host [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] CPU controller missing on host. 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.138 2 DEBUG nova.virt.libvirt.host [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Searching host: 'np0005471150.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.139 2 DEBUG nova.virt.libvirt.host [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.139 2 DEBUG nova.virt.libvirt.driver [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.140 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-05T08:29:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='76acf371-9e6c-4c5c-aec4-748e712efe27',id=2,is_public=True,memory_mb=512,name='m1.small',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta 
ImageMeta(checksum=,container_format='bare',created_at=,direct_url=,disk_format='qcow2',id=e521096d-c3e6-4c8e-9ba6-a35f9a80b219,min_disk=1,min_ram=0,name=,owner=,properties=ImageMetaProps,protected=,size=,status=,tags=,updated_at=,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.141 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.141 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.141 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.142 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.142 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Chose 
sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.143 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.143 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.143 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.144 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.144 2 DEBUG nova.virt.hardware [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Sorted 
desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.145 2 DEBUG nova.objects.instance [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.163 2 DEBUG nova.privsep.utils [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.163 2 DEBUG oslo_concurrency.processutils [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.638 2 DEBUG oslo_concurrency.processutils [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.640 2 DEBUG oslo_concurrency.processutils [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 
8b36437b65444bcdac75beef77b6981e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.920 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.943 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.943 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.944 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.945 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.945 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.946 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.946 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.947 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.947 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.947 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.962 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.963 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.963 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.964 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:48:00 localhost nova_compute[297021]: 2025-10-05 09:48:00.964 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.084 2 DEBUG oslo_concurrency.processutils [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.087 2 DEBUG nova.virt.libvirt.vif [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T08:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005471150.localdomain',hostname='test',id=2,image_ref='e521096d-c3e6-4c8e-9ba6-a35f9a80b219',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T08:30:14Z,launched_on='np0005471150.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005471150.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=4,progress=0,project_id='8b36437b65444bcdac75beef77b6981e',ramdisk_id='',reservation_id='r-dff44nva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='e521096d-c3e6-4c8e-9ba6-a35f9a80b219',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-10-05T09:47:33Z,user_data=None,user_id='8d17cd5027274bc5883e2354d4ddec6b',uuid=2b20c302-a8d1-4ee0-990b-24973ca23df1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=Non
e,vm_state='stopped') vif={"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.087 2 DEBUG nova.network.os_vif_util [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Converting VIF {"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.088 2 DEBUG nova.network.os_vif_util [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.091 2 DEBUG nova.objects.instance [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.111 2 DEBUG nova.virt.libvirt.driver [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] End 
_get_guest_xml xml= [guest domain XML elided: the syslog wrapper split each XML element onto its own "Oct 5 05:48:01 localhost nova_compute[297021]:" continuation line and dropped the markup; recoverable fragments: uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1, name instance-00000002, memory 524288, vcpus 1, nova:name test, creationTime 2025-10-05 09:48:00, flavor 512 MB / 1 vCPU, owner admin/admin, sysinfo RDO OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9, os type hvm, rng backend /dev/urandom] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.113 2 DEBUG nova.virt.libvirt.driver [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.113 2 DEBUG nova.virt.libvirt.driver [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.115 2 DEBUG nova.virt.libvirt.vif [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] vif_type=ovs
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T08:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005471150.localdomain',hostname='test',id=2,image_ref='e521096d-c3e6-4c8e-9ba6-a35f9a80b219',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T08:30:14Z,launched_on='np0005471150.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005471150.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=,power_state=4,progress=0,project_id='8b36437b65444bcdac75beef77b6981e',ramdisk_id='',reservation_id='r-dff44nva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='e521096d-c3e6-4c8e-9ba6-a35f9a80b219',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-10-05T09:47:33Z,user_data=None,user_id='8d17cd5027274bc5883e2354d4ddec6b',uuid=2b20c302-a8d1-4ee0-990b-24973ca23df1,vcpu_model=VirtCPUModel,vcpus=
1,vm_mode=None,vm_state='stopped') vif={"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.115 2 DEBUG nova.network.os_vif_util [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Converting VIF {"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.116 2 DEBUG nova.network.os_vif_util [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.117 2 DEBUG os_vif [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.118 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.119 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4db5c636-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.125 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4db5c636-30, col_values=(('external_ids', {'iface-id': '4db5c636-3094-4e86-9093-8123489e64be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:2c:a3', 'vm-uuid': '2b20c302-a8d1-4ee0-990b-24973ca23df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.177 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.182 2 INFO os_vif [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:2c:a3,bridge_name='br-int',has_traffic_filtering=True,id=4db5c636-3094-4e86-9093-8123489e64be,network=Network(20d6a6dc-0f38-4a89-b3fc-56befd04e92f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db5c636-30')#033[00m Oct 5 05:48:01 localhost systemd[1]: Starting libvirt secret daemon... Oct 5 05:48:01 localhost systemd[1]: Started libvirt secret daemon. Oct 5 05:48:01 localhost NetworkManager[5981]: [1759657681.2866] manager: (tap4db5c636-30): new Tun device (/org/freedesktop/NetworkManager/Devices/16) Oct 5 05:48:01 localhost kernel: device tap4db5c636-30 entered promiscuous mode Oct 5 05:48:01 localhost systemd-udevd[297921]: Network interface NamePolicy= disabled on kernel command line. Oct 5 05:48:01 localhost ovn_controller[157794]: 2025-10-05T09:48:01Z|00055|binding|INFO|Claiming lport 4db5c636-3094-4e86-9093-8123489e64be for this chassis. 
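[editor note] The ovsdbapp transaction logged above (AddPortCommand followed by DbSetCommand on the Interface row) is what ties the tap device to OVN: ovn-controller claims the logical port whose name matches the interface's external_ids:iface-id. A minimal sketch of that metadata mapping, assuming a simplified stand-in function (not the real ovsdbapp API), with the values taken from the transaction above:

```python
# Sketch of the external_ids mapping that DbSetCommand writes onto the OVS
# Interface row for the tap device. ovn-controller binds the logical port
# whose name equals 'iface-id' to this chassis (see "Claiming lport" above).
def build_external_ids(port_id: str, mac: str, vm_uuid: str) -> dict:
    """Return the per-port metadata Nova sets on the plugged tap interface."""
    return {
        "iface-id": port_id,      # matched against the OVN logical port name
        "iface-status": "active",
        "attached-mac": mac,      # MAC address Neutron allocated for the port
        "vm-uuid": vm_uuid,       # owning Nova instance UUID
    }

# Values from the DbSetCommand logged above.
ids = build_external_ids(
    "4db5c636-3094-4e86-9093-8123489e64be",
    "fa:16:3e:a6:2c:a3",
    "2b20c302-a8d1-4ee0-990b-24973ca23df1",
)
print(ids["iface-id"])
```

Once this row lands in the local OVSDB, the "Claiming lport 4db5c636-… for this chassis" and "Setting lport … up in Southbound" messages from ovn_controller follow directly from the iface-id match.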
Oct 5 05:48:01 localhost ovn_controller[157794]: 2025-10-05T09:48:01Z|00056|binding|INFO|4db5c636-3094-4e86-9093-8123489e64be: Claiming fa:16:3e:a6:2c:a3 192.168.0.56 Oct 5 05:48:01 localhost NetworkManager[5981]: [1759657681.3055] device (tap4db5c636-30): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 5 05:48:01 localhost NetworkManager[5981]: [1759657681.3061] device (tap4db5c636-30): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost ovn_controller[157794]: 2025-10-05T09:48:01Z|00057|ovn_bfd|INFO|Enabled BFD on interface ovn-fe3fe5-0 Oct 5 05:48:01 localhost ovn_controller[157794]: 2025-10-05T09:48:01Z|00058|ovn_bfd|INFO|Enabled BFD on interface ovn-891f35-0 Oct 5 05:48:01 localhost ovn_controller[157794]: 2025-10-05T09:48:01Z|00059|ovn_bfd|INFO|Enabled BFD on interface ovn-85ea67-0 Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.310 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:2c:a3 192.168.0.56'], port_security=['fa:16:3e:a6:2c:a3 192.168.0.56'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 
'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.56/24', 'neutron:device_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': '8b36437b65444bcdac75beef77b6981e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4fbe78ed-92dd-4e52-8c97-e662f3cb3af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f49a96c-a4ec-4b07-9e41-306ef014a4cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=4db5c636-3094-4e86-9093-8123489e64be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.312 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 4db5c636-3094-4e86-9093-8123489e64be in datapath 20d6a6dc-0f38-4a89-b3fc-56befd04e92f bound to our chassis#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.315 163434 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20d6a6dc-0f38-4a89-b3fc-56befd04e92f#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.326 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[9547ea23-4a47-4bab-95fa-d61168049597]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.327 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20d6a6dc-01 in ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f namespace 
provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.329 163567 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20d6a6dc-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.329 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdc8bc5-5a63-47ad-ad9f-b2802ed430c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.332 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[088c3844-a073-4c3e-9dd2-4f9448c94e4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.348 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[361a272e-4352-4766-afad-4da1035a8415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.360 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[ad735a45-4795-436a-8480-f156d8db574c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost ovn_controller[157794]: 2025-10-05T09:48:01Z|00060|binding|INFO|Setting lport 4db5c636-3094-4e86-9093-8123489e64be ovn-installed in OVS Oct 5 05:48:01 localhost ovn_controller[157794]: 2025-10-05T09:48:01Z|00061|binding|INFO|Setting lport 4db5c636-3094-4e86-9093-8123489e64be up in Southbound Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost systemd-machined[84982]: New machine qemu-2-instance-00000002. Oct 5 05:48:01 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000002. Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.385 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[4b189cd1-2a77-4a33-a303-2b8a7ccbe8a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.393 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[98cfb908-95dd-45c1-a86c-26763d6ace7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost NetworkManager[5981]: [1759657681.3940] manager: (tap20d6a6dc-00): new Veth device (/org/freedesktop/NetworkManager/Devices/17) Oct 5 05:48:01 localhost systemd-udevd[297923]: Network interface NamePolicy= disabled on kernel command line. 
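[editor note] The metadata agent records above show it creating the veth pair tap20d6a6dc-00/tap20d6a6dc-01 inside namespace ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f. The names visible in the log are consistent with a truncated-UUID convention (Linux caps interface names at 15 characters, IFNAMSIZ minus the terminator); the helpers below are an illustrative reconstruction of that pattern inferred from this log, not Neutron's actual functions:

```python
# Hypothetical helpers reproducing the device naming visible in the log.
# Interface names must fit in 15 chars, so the agent keys them off a
# truncated network UUID plus a one-digit side suffix.
def veth_pair_names(network_id: str) -> tuple:
    base = "tap" + network_id[:10]
    return base + "0", base + "1"   # host side, namespace side

def namespace_name(network_id: str) -> str:
    # One ovnmeta- namespace per provisioned datapath/network.
    return "ovnmeta-" + network_id

net = "20d6a6dc-0f38-4a89-b3fc-56befd04e92f"
print(veth_pair_names(net))
print(namespace_name(net))
```

With the network UUID from the log, this yields tap20d6a6dc-00 and tap20d6a6dc-01, matching the "Creating VETH tap20d6a6dc-01 in ovnmeta-…" record above.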
Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.425 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[0fad5192-4c37-4772-b9da-2f94194a9c12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.429 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[c88b7594-f2af-4f3b-9c59-8903f8148ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap20d6a6dc-01: link becomes ready Oct 5 05:48:01 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap20d6a6dc-00: link becomes ready Oct 5 05:48:01 localhost NetworkManager[5981]: [1759657681.4428] device (tap20d6a6dc-00): carrier: link connected Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.445 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[e951e696-0262-47b8-a121-ca04151eb332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.464 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0c3be4-235f-4596-a6f2-d2272cf4b587]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20d6a6dc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], 
['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4e:95:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 
'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1116059, 'reachable_time': 18227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297960, 'error': None, 'target': 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'stats': (0, 0, 
0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.465 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.485 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[80991339-49a0-471d-9e40-b913fe739e28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:95ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1116059, 'tstamp': 1116059}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297963, 'error': None, 'target': 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.499 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac50bd0-b08c-4b86-bb37-9963c441284b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20d6a6dc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], 
['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4e:95:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': 
[['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1116059, 'reachable_time': 18227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297964, 'error': None, 'target': 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.522 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.523 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.536 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[d151bcf8-60bd-48c1-82f7-dc34f071d7cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.596 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[d1512833-7db3-4802-9445-8fce5a57ca52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.598 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20d6a6dc-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.599 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.600 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 
command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20d6a6dc-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:48:01 localhost kernel: device tap20d6a6dc-00 entered promiscuous mode Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.611 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20d6a6dc-00, col_values=(('external_ids', {'iface-id': 'cd4e79ca-7111-4d41-b9b0-672ba46474d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:48:01 localhost ovn_controller[157794]: 2025-10-05T09:48:01Z|00062|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.615 163434 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.616 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[a1eb415b-9ace-4a52-b92a-eaf96e527c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:48:01 localhost 
ovn_metadata_agent[163429]: 2025-10-05 09:48:01.618 163434 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: global Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: log /dev/log local0 debug Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: log-tag haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: user root Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: group root Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: maxconn 1024 Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: pidfile /var/lib/neutron/external/pids/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.pid.haproxy Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: daemon Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: defaults Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: log global Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: mode http Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: option httplog Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: option dontlognull Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: option http-server-close Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: option forwardfor Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: retries 3 Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: timeout http-request 30s Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: timeout connect 30s Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: timeout client 32s Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: timeout server 32s Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: timeout http-keep-alive 30s Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: listen listener Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: bind 169.254.169.254:80 Oct 5 
05:48:01 localhost ovn_metadata_agent[163429]: server metadata /var/lib/neutron/metadata_proxy Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: http-request add-header X-OVN-Network-ID 20d6a6dc-0f38-4a89-b3fc-56befd04e92f Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 5 05:48:01 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:01.619 163434 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'env', 'PROCESS_TAG=haproxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20d6a6dc-0f38-4a89-b3fc-56befd04e92f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.752 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.754 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12503MB free_disk=41.8370475769043GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.754 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.755 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.886 2 DEBUG nova.compute.manager [req-9c723f30-e647-4710-9454-3759c2b2280a req-4f47da37-b264-425f-9709-f7d6f699d7a4 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Received event network-vif-plugged-4db5c636-3094-4e86-9093-8123489e64be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.886 2 DEBUG oslo_concurrency.lockutils [req-9c723f30-e647-4710-9454-3759c2b2280a req-4f47da37-b264-425f-9709-f7d6f699d7a4 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.886 2 DEBUG oslo_concurrency.lockutils [req-9c723f30-e647-4710-9454-3759c2b2280a req-4f47da37-b264-425f-9709-f7d6f699d7a4 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.887 2 DEBUG oslo_concurrency.lockutils [req-9c723f30-e647-4710-9454-3759c2b2280a req-4f47da37-b264-425f-9709-f7d6f699d7a4 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.887 2 DEBUG nova.compute.manager [req-9c723f30-e647-4710-9454-3759c2b2280a req-4f47da37-b264-425f-9709-f7d6f699d7a4 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] No waiting events found dispatching network-vif-plugged-4db5c636-3094-4e86-9093-8123489e64be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.887 2 WARNING nova.compute.manager [req-9c723f30-e647-4710-9454-3759c2b2280a req-4f47da37-b264-425f-9709-f7d6f699d7a4 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Received 
unexpected event network-vif-plugged-4db5c636-3094-4e86-9093-8123489e64be for instance with vm_state stopped and task_state powering-on.#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.957 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.957 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.958 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:48:01 localhost nova_compute[297021]: 2025-10-05 09:48:01.988 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:48:02 localhost podman[298039]: Oct 5 05:48:02 localhost podman[298039]: 2025-10-05 09:48:02.022437182 +0000 UTC m=+0.092060521 container create 7a5ceaa70e98f1ff6d4db4ddda22113f22256c690071d94cb7d7a0cd1259aa57 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:48:02 localhost systemd[1]: Started libpod-conmon-7a5ceaa70e98f1ff6d4db4ddda22113f22256c690071d94cb7d7a0cd1259aa57.scope. Oct 5 05:48:02 localhost podman[298039]: 2025-10-05 09:48:01.975724538 +0000 UTC m=+0.045347927 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 05:48:02 localhost systemd[1]: Started libcrun container. Oct 5 05:48:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212f8324d50f4dc5aabb58187aac9b06496ed090db75fbe444ca395a28280509/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:48:02 localhost podman[298039]: 2025-10-05 09:48:02.102665918 +0000 UTC m=+0.172289267 container init 7a5ceaa70e98f1ff6d4db4ddda22113f22256c690071d94cb7d7a0cd1259aa57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Oct 5 05:48:02 localhost podman[298039]: 2025-10-05 09:48:02.114769048 +0000 UTC m=+0.184392397 container start 7a5ceaa70e98f1ff6d4db4ddda22113f22256c690071d94cb7d7a0cd1259aa57 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:48:02 localhost neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298054]: [NOTICE] (298068) : New worker (298079) forked Oct 5 05:48:02 localhost neutron-haproxy-ovnmeta-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298054]: [NOTICE] (298068) : Loading success. Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.148 2 DEBUG nova.compute.manager [None req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.149 2 DEBUG nova.virt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.150 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] VM Resumed (Lifecycle Event)#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.181 2 INFO nova.virt.libvirt.driver [-] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Instance rebooted successfully.#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.182 2 DEBUG nova.compute.manager [None 
req-e26152d2-e978-495b-986f-055cb3f54638 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.268 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:48:02 localhost ovn_controller[157794]: 2025-10-05T09:48:02Z|00063|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.295 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 05:48:02 localhost ovn_controller[157794]: 2025-10-05T09:48:02Z|00064|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.355 2 DEBUG nova.virt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Emitting event 
Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.356 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] VM Started (Lifecycle Event)#033[00m Oct 5 05:48:02 localhost ovn_controller[157794]: 2025-10-05T09:48:02Z|00065|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.427 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.432 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.453 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.458 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - 
-] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.591 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.614 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:48:02 localhost nova_compute[297021]: 2025-10-05 09:48:02.615 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:48:03 localhost ovn_controller[157794]: 2025-10-05T09:48:03Z|00066|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:48:03 localhost nova_compute[297021]: 2025-10-05 09:48:03.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:03 localhost snmpd[68045]: IfIndex of an interface changed. Such interfaces will appear multiple times in IF-MIB. Oct 5 05:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:48:03 localhost systemd[1]: tmp-crun.gAXxaw.mount: Deactivated successfully. Oct 5 05:48:03 localhost podman[298090]: 2025-10-05 09:48:03.72985248 +0000 UTC m=+0.125928083 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Oct 5 05:48:03 localhost podman[298090]: 2025-10-05 09:48:03.744452769 +0000 UTC m=+0.140528382 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:48:03 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:48:03 localhost nova_compute[297021]: 2025-10-05 09:48:03.945 2 DEBUG nova.compute.manager [req-4cd16759-b76d-4305-9264-2773f47b0ce5 req-239ddc56-0fb5-490e-86e9-0fda7e0a0a26 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Received event network-vif-plugged-4db5c636-3094-4e86-9093-8123489e64be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 05:48:03 localhost nova_compute[297021]: 2025-10-05 09:48:03.946 2 DEBUG oslo_concurrency.lockutils [req-4cd16759-b76d-4305-9264-2773f47b0ce5 req-239ddc56-0fb5-490e-86e9-0fda7e0a0a26 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:48:03 localhost nova_compute[297021]: 2025-10-05 09:48:03.947 2 DEBUG oslo_concurrency.lockutils [req-4cd16759-b76d-4305-9264-2773f47b0ce5 req-239ddc56-0fb5-490e-86e9-0fda7e0a0a26 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:48:03 localhost nova_compute[297021]: 2025-10-05 09:48:03.948 2 DEBUG oslo_concurrency.lockutils 
[req-4cd16759-b76d-4305-9264-2773f47b0ce5 req-239ddc56-0fb5-490e-86e9-0fda7e0a0a26 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:48:03 localhost nova_compute[297021]: 2025-10-05 09:48:03.948 2 DEBUG nova.compute.manager [req-4cd16759-b76d-4305-9264-2773f47b0ce5 req-239ddc56-0fb5-490e-86e9-0fda7e0a0a26 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] No waiting events found dispatching network-vif-plugged-4db5c636-3094-4e86-9093-8123489e64be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 05:48:03 localhost nova_compute[297021]: 2025-10-05 09:48:03.949 2 WARNING nova.compute.manager [req-4cd16759-b76d-4305-9264-2773f47b0ce5 req-239ddc56-0fb5-490e-86e9-0fda7e0a0a26 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Received unexpected event network-vif-plugged-4db5c636-3094-4e86-9093-8123489e64be for instance with vm_state active and task_state None.#033[00m Oct 5 05:48:04 localhost ovn_controller[157794]: 2025-10-05T09:48:04Z|00067|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 05:48:04 localhost nova_compute[297021]: 2025-10-05 09:48:04.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:06 localhost nova_compute[297021]: 2025-10-05 09:48:06.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:06 localhost nova_compute[297021]: 
2025-10-05 09:48:06.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:48:08 localhost podman[298108]: 2025-10-05 09:48:08.671465741 +0000 UTC m=+0.075964571 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller) Oct 5 05:48:08 localhost podman[298108]: 2025-10-05 09:48:08.704129812 +0000 UTC m=+0.108628702 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:48:08 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:48:11 localhost nova_compute[297021]: 2025-10-05 09:48:11.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:11 localhost nova_compute[297021]: 2025-10-05 09:48:11.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:48:12 localhost systemd[1]: tmp-crun.8XhnaM.mount: Deactivated successfully. Oct 5 05:48:12 localhost podman[298133]: 2025-10-05 09:48:12.687510227 +0000 UTC m=+0.098556447 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 05:48:12 localhost podman[298133]: 2025-10-05 
09:48:12.72177542 +0000 UTC m=+0.132821600 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:48:12 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:48:13 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Oct 5 05:48:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46414 DF PROTO=TCP SPT=42204 DPT=9102 SEQ=3690776982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF392CE0000000001030307) Oct 5 05:48:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46415 DF PROTO=TCP SPT=42204 DPT=9102 SEQ=3690776982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF396DD0000000001030307) Oct 5 05:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:48:15 localhost nova_compute[297021]: 2025-10-05 09:48:15.608 2 DEBUG nova.compute.manager [None req-0b4a74da-8395-473e-8859-657b75d95b73 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 05:48:15 localhost nova_compute[297021]: 2025-10-05 09:48:15.625 2 INFO nova.compute.manager [None req-0b4a74da-8395-473e-8859-657b75d95b73 8d17cd5027274bc5883e2354d4ddec6b 8b36437b65444bcdac75beef77b6981e - - default default] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Retrieving diagnostics#033[00m Oct 5 05:48:15 localhost podman[298152]: 2025-10-05 09:48:15.683057057 +0000 UTC m=+0.092501602 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, 
name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 5 05:48:15 localhost ovn_controller[157794]: 2025-10-05T09:48:15Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:2c:a3 192.168.0.56 Oct 5 05:48:15 localhost podman[298152]: 2025-10-05 09:48:15.725875245 +0000 UTC m=+0.135319820 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 05:48:15 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:48:16 localhost nova_compute[297021]: 2025-10-05 09:48:16.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46416 DF PROTO=TCP SPT=42204 DPT=9102 SEQ=3690776982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF39EDD0000000001030307) Oct 5 05:48:16 localhost nova_compute[297021]: 2025-10-05 09:48:16.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:48:17 localhost systemd[1]: tmp-crun.Ej4YvL.mount: Deactivated successfully. Oct 5 05:48:17 localhost podman[298172]: 2025-10-05 09:48:17.676003009 +0000 UTC m=+0.077978246 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:48:17 localhost podman[298172]: 2025-10-05 09:48:17.686956908 +0000 UTC m=+0.088932165 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:48:17 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 05:48:18 localhost nova_compute[297021]: 2025-10-05 09:48:18.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:18 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:18.486 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:48:18 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:18.487 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:20.448 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:20.449 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:20.450 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:48:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46417 DF PROTO=TCP SPT=42204 DPT=9102 SEQ=3690776982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF3AE9D0000000001030307) Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:20.715 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:20.720 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0#015 Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:20 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:21 localhost nova_compute[297021]: 2025-10-05 09:48:21.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:21 localhost podman[248506]: time="2025-10-05T09:48:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:48:21 localhost podman[248506]: @ - - [05/Oct/2025:09:48:21 
+0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 137513 "" "Go-http-client/1.1" Oct 5 05:48:21 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:21.489 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:48:21 localhost podman[248506]: @ - - [05/Oct/2025:09:48:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17342 "" "Go-http-client/1.1" Oct 5 05:48:21 localhost nova_compute[297021]: 2025-10-05 09:48:21.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:22 localhost openstack_network_exporter[250601]: ERROR 09:48:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:48:22 localhost openstack_network_exporter[250601]: ERROR 09:48:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:48:22 localhost openstack_network_exporter[250601]: ERROR 09:48:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:48:22 localhost openstack_network_exporter[250601]: ERROR 09:48:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:48:22 localhost openstack_network_exporter[250601]: Oct 5 05:48:22 localhost openstack_network_exporter[250601]: ERROR 09:48:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:48:22 localhost openstack_network_exporter[250601]: Oct 5 05:48:22 localhost 
haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41572 [05/Oct/2025:09:48:20.714] listener listener/metadata 0/0/0/1483/1483 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:22.196 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:22.197 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 146 time: 1.4776671#033[00m Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:22.212 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:22.213 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-keys HTTP/1.0#015 Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:22 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:48:22 localhost podman[298196]: 2025-10-05 09:48:22.676964689 +0000 UTC m=+0.078343347 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:48:22 localhost podman[298196]: 2025-10-05 09:48:22.688107982 +0000 UTC m=+0.089486660 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:48:22 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:48:24 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41578 [05/Oct/2025:09:48:22.212] listener listener/metadata 0/0/0/2764/2764 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-keys HTTP/1.1" Oct 5 05:48:24 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:24.976 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/public-keys HTTP/1.1" status: 404 len: 297 time: 2.7627590#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:24.997 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:24.998 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.134 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.134 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 146 time: 0.1365047#033[00m 
Oct 5 05:48:25 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41582 [05/Oct/2025:09:48:24.996] listener listener/metadata 0/0/0/137/137 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.142 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.143 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/ami-launch-index HTTP/1.0#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.279 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.280 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" status: 200 len: 136 time: 0.1375210#033[00m Oct 5 05:48:25 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41594 [05/Oct/2025:09:48:25.141] listener listener/metadata 0/0/0/138/138 200 120 - - ---- 1/1/0/0/0 0/0 "GET 
/2009-04-04/meta-data/ami-launch-index HTTP/1.1" Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.287 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.288 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-type HTTP/1.0#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.545 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.546 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/instance-type HTTP/1.1" status: 200 len: 143 time: 0.2578011#033[00m Oct 5 05:48:25 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41606 [05/Oct/2025:09:48:25.287] listener listener/metadata 0/0/0/259/259 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-type HTTP/1.1" Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.554 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server 
/usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:25.554 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-ipv4 HTTP/1.0#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:25 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.181 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.181 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" status: 200 len: 148 time: 0.6271522#033[00m Oct 5 05:48:26 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41620 [05/Oct/2025:09:48:25.553] listener listener/metadata 0/0/0/628/628 200 132 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" Oct 5 05:48:26 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. 
Oct 5 05:48:26 localhost nova_compute[297021]: 2025-10-05 09:48:26.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.236 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.237 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-ipv4 HTTP/1.0#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.449 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.450 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" status: 200 len: 150 time: 0.2132921#033[00m Oct 5 05:48:26 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41628 [05/Oct/2025:09:48:26.235] listener listener/metadata 0/0/0/214/214 200 134 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" Oct 5 05:48:26 
localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.459 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.459 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/hostname HTTP/1.0#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.596 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.596 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/hostname HTTP/1.1" status: 200 len: 139 time: 0.1369479#033[00m Oct 5 05:48:26 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41632 [05/Oct/2025:09:48:26.458] listener listener/metadata 0/0/0/137/137 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/hostname HTTP/1.1" Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.604 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.604 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-hostname HTTP/1.0#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:26 localhost nova_compute[297021]: 2025-10-05 09:48:26.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:26 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:48:26 localhost podman[298220]: 2025-10-05 09:48:26.692672233 +0000 UTC m=+0.093682214 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Oct 5 05:48:26 localhost 
podman[298220]: 2025-10-05 09:48:26.70981346 +0000 UTC m=+0.110823401 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:48:26 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.742 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:26 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41642 [05/Oct/2025:09:48:26.603] listener listener/metadata 0/0/0/139/139 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.742 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" status: 200 len: 139 time: 0.1379740#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.750 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:26.751 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/user-data HTTP/1.0#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:26 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:26 localhost podman[298238]: 2025-10-05 09:48:26.788522156 +0000 UTC m=+0.085914473 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 05:48:26 localhost podman[298238]: 2025-10-05 09:48:26.802997041 +0000 UTC m=+0.100389308 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, 
org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:48:26 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:48:27 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41646 [05/Oct/2025:09:48:26.750] listener listener/metadata 0/0/0/526/526 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/user-data HTTP/1.1" Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.277 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/user-data HTTP/1.1" status: 404 len: 297 time: 0.5256355#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.296 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.297 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping HTTP/1.0#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.444 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.446 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" status: 200 len: 155 time: 0.1486006#033[00m Oct 5 
05:48:27 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:41656 [05/Oct/2025:09:48:27.295] listener listener/metadata 0/0/0/150/150 200 139 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.452 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.453 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.0#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.588 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.589 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" status: 200 len: 138 time: 0.1356690#033[00m Oct 5 05:48:27 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:55232 [05/Oct/2025:09:48:27.451] listener listener/metadata 0/0/0/137/137 200 122 - - ---- 
1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.596 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.596 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.0#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.735 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:27 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:55248 [05/Oct/2025:09:48:27.595] listener listener/metadata 0/0/0/140/140 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.736 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" status: 200 len: 143 time: 0.1394286#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.742 163532 DEBUG 
eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:27.742 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.0#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:27 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:28.633 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:28.634 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" status: 200 len: 143 time: 0.8917406#033[00m Oct 5 05:48:28 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:55254 [05/Oct/2025:09:48:27.741] listener listener/metadata 0/0/0/892/892 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:28.645 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: 2025-10-05 
09:48:28.646 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-hostname HTTP/1.0#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:28.798 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:28 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:55266 [05/Oct/2025:09:48:28.644] listener listener/metadata 0/0/0/154/154 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:28.799 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" status: 200 len: 139 time: 0.1526389#033[00m Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:28.806 163532 DEBUG eventlet.wsgi.server [-] (163532) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:28.807 163532 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.0#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: Accept: */*#015 Oct 5 
05:48:28 localhost ovn_metadata_agent[163429]: Connection: close#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: Content-Type: text/plain#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: Host: 169.254.169.254#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: User-Agent: curl/7.84.0#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: X-Forwarded-For: 192.168.0.56#015 Oct 5 05:48:28 localhost ovn_metadata_agent[163429]: X-Ovn-Network-Id: 20d6a6dc-0f38-4a89-b3fc-56befd04e92f __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Oct 5 05:48:29 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:29.691 163532 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Oct 5 05:48:29 localhost ovn_metadata_agent[163429]: 2025-10-05 09:48:29.692 163532 INFO eventlet.wsgi.server [-] 192.168.0.56, "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" status: 200 len: 139 time: 0.8846099#033[00m Oct 5 05:48:29 localhost haproxy-metadata-proxy-20d6a6dc-0f38-4a89-b3fc-56befd04e92f[298079]: 192.168.0.56:55268 [05/Oct/2025:09:48:28.806] listener listener/metadata 0/0/0/885/885 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" Oct 5 05:48:31 localhost nova_compute[297021]: 2025-10-05 09:48:31.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:31 localhost ovn_controller[157794]: 2025-10-05T09:48:31Z|00068|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory Oct 5 05:48:31 localhost nova_compute[297021]: 2025-10-05 09:48:31.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:32 localhost snmpd[68045]: empty variable list in _query Oct 5 05:48:32 localhost 
snmpd[68045]: empty variable list in _query Oct 5 05:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:48:34 localhost podman[298343]: 2025-10-05 09:48:34.684525084 +0000 UTC m=+0.088195724 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Oct 5 05:48:34 localhost podman[298343]: 2025-10-05 09:48:34.694815985 +0000 UTC m=+0.098486645 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0) Oct 5 05:48:34 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:48:36 localhost nova_compute[297021]: 2025-10-05 09:48:36.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:36 localhost nova_compute[297021]: 2025-10-05 09:48:36.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.834 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.835 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.857 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 37 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.858 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7b096639-754e-48d8-bc23-103b4991a0a0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 37, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:48:38.835954', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7817c7da-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': 'e42658695b990aec2487ad15dbc53de79f6b33470fe35880938da564f1a1a7a6'}, {'source': 'openstack', 'counter_name': 
'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:48:38.835954', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7817e1ca-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': '8cc9407a129f782936a963f0838cfdec01c8f778c65604527e70073b6d2467b3'}]}, 'timestamp': '2025-10-05 09:48:38.858952', '_unique_id': 'bc5e8374c40b4520ace8c390173b8073'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 
423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.860 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.861 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.865 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0d5924af-cc09-4748-86c3-dac1d91921a4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.861926', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '7818ecb4-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': 'dd97b55fee8c0c2b60832c94f9d2a472b8dcefef945168ea9cddc17274e13a32'}]}, 'timestamp': '2025-10-05 09:48:38.865670', '_unique_id': '85c00c181af3412083a462e767338f08'}: 
kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:48:38.866 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging The above 
exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.866 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.867 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.867 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.891 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 11160000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '106dba4b-d9b9-46df-a63e-25493208c442', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 11160000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:48:38.868037', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '781cfaf2-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.115540171, 'message_signature': '7b2274ffe4440b48c0e93fd8839b12bd58ed2490b21788260f9990fda47f1f8e'}]}, 'timestamp': '2025-10-05 09:48:38.892237', '_unique_id': '5486411724334ad887b19fdc2d6a99e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.893 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.894 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.906 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.906 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b0139b2c-eaeb-4337-9dca-c04ac828d489', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:48:38.894586', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '781f329a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.118494471, 'message_signature': '04db51fc1767f9c4e74f28def12b8d330cfb78f9ea1e16ceb7cfd62f94538a06'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:48:38.894586', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '781f4488-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.118494471, 'message_signature': 'bf1fd7a86da6a49bf351e56d2fce47509ef30ae350f2ce16fedbd99cbb919792'}]}, 'timestamp': '2025-10-05 09:48:38.907201', '_unique_id': '871597b9443f4a5586dbdb01ace4563a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.908 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.909 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.909 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7706acd6-9d16-4f6d-9629-bb879815a84e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.909485', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '781faee6-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': '1593216efd3d83a57eb863131b1d666bcb1d34045104c326c415d650d2e6385f'}]}, 'timestamp': '2025-10-05 09:48:38.909951', '_unique_id': '46f92f9829964e24a6fd932488829db5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.911 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.912 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f45bc086-cc74-48e0-a734-79887c83c1b0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.912076', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 
'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '782013ae-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': '38eb8ca824e22f81b405f29e617ebe2f0a6c5a1074df406a8ee3f8bad16a32dc'}]}, 'timestamp': '2025-10-05 09:48:38.912564', '_unique_id': 'f32df9a187e34c1487a776b0569b4441'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR 
oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.914 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.914 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 372736 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.915 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c454809a-1e09-4f65-a03a-44d1dff3f0da', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 372736, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:48:38.914647', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '782077fe-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': 'cb29bbef9a831092a48ec24f0730082e51fb1b653e5fee13d1011e2605adb2e1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:48:38.914647', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '782087da-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': '2897dc62a3604ea1920fb041c85a2a4eb0f3412ff06ec1f138b5cbaf41f76370'}]}, 'timestamp': '2025-10-05 09:48:38.915541', '_unique_id': 'ccf0fc1c799b49d292d304cee917449c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.916 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.917 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.917 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd9bd2015-0f09-482f-b5e1-a5c73caffa91', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.917667', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '7820ee1e-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': '6888e44e0edf9bcc9a7eaa7c64eba7c91eeba1573094745971d9eac305276a8f'}]}, 'timestamp': '2025-10-05 09:48:38.918122', '_unique_id': 'd0d79955bb934ca3ab0b8382199d3fce'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.919 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.920 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.920 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.920 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '85caa8bc-38f4-4b71-880b-bb92ea51ead8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.920357', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '782158fe-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': '269a77a4ccc7a5afc4026a10805453caa9c33ea98edac25af04b6c42cbc1c343'}]}, 'timestamp': '2025-10-05 09:48:38.920856', '_unique_id': '321d180b452f483eb3e39c7b9e229304'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.921 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.922 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.922 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2191d7b3-82d4-4c5e-9943-99da349baa29', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:48:38.922919', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7821bb00-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': '84bbe1987c83a8b958c756883ab535a7499522282878ec77abdfa36751f1b4f2'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:48:38.922919', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7821cdc0-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': 'eb698f3a762c1aa19363646c6c571b5daa86c4b10c73624376491c50e4eac668'}]}, 'timestamp': '2025-10-05 09:48:38.923823', '_unique_id': '7117b85e64184e61a3e23f4318a26c2d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.924 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.925 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.925 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f364023e-1ff5-4a0a-8b8b-008a0b7c8007', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.925943', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '78223148-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': '0b81f22f1a6f8fed09ca5b5ca8a94336963fd3bb3c37e2035efcf1d9f504bf12'}]}, 'timestamp': '2025-10-05 09:48:38.926424', '_unique_id': '08f617406d5247c7979f3720ad65413b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.927 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:48:38.928 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.928 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4c785ed9-af6e-477b-abdf-224b6981b427', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.928476', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '78229106-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': '3e42703386097951abac80315403e939cff4e6609f63bb2fdbdd5eaa177f3674'}]}, 'timestamp': '2025-10-05 09:48:38.928759', '_unique_id': '6a29703be53e4008a402e3c3b6962183'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:48:38.929 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.929 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e6b183c3-313f-4c4c-8f6e-2d0aec7b1efb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.930021', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '7822cd4c-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': 'cf30d87b974440b7f6ef1467fd0f37ecf4dd0a81bce796cefaac591533f77f5e'}]}, 'timestamp': '2025-10-05 09:48:38.930301', '_unique_id': 'e0b7ee8c3329405f864a32dcbf2d2509'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:48:38.931 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.931 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.63671875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '17fb49a8-f244-407e-878d-75526c330405', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.63671875, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:48:38.931702', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 
'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '78230ece-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.115540171, 'message_signature': 'afc54ff8f63767e142408afe22f61c3620069a904253401cf66751c24c77812a'}]}, 'timestamp': '2025-10-05 09:48:38.931967', '_unique_id': '2efc0e6ea3bc425ea41bfd491accf856'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 
05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 
05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.932 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.933 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.933 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.933 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'dff850cf-4d0f-4528-80a2-7b3ede71e13b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:48:38.933225', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '78234a4c-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.118494471, 'message_signature': '53a2a2428cdb23069544ea9ca68661fbb546b9b3d8d135c18bd5019f386c7b84'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:48:38.933225', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '782354e2-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.118494471, 'message_signature': '30c1755818a5b1d6c19bf02ae67d3b6a3861efa990c539a1389d13f7850d0997'}]}, 'timestamp': '2025-10-05 09:48:38.933754', '_unique_id': 'c19186e54be44efdb100cf8b4c3eaef0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.934 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '104376e5-b8ae-49c7-99a1-1495ef7ebd1a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:48:38.935054', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '782391d2-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': 'b7ab557f382c46d5d298fdab43241da38af90acdaf9cc9e1e707998ed472291f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:48:38.935054', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '78239e2a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': 'ad6c7c5dfe7084b0f953121c0a43268ee529a0fb06eb9a82c69262c5dc9b66ef'}]}, 'timestamp': '2025-10-05 09:48:38.935646', '_unique_id': 'c84fae1cb2b547b59c45e270681b0dc6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 
05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.936 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.937 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1089643021 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.937 12 DEBUG 
ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e29ed6bf-2fa7-48aa-be25-1db70662dbf9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1089643021, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:48:38.937047', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7823df84-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': 'fb14fffb18f3eaacfec028cebf10fb01696e22e7c862def06cad22ddfa5d79f5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 
'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:48:38.937047', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7823ea2e-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': 'cd37ace5540f34be1465720d1a45d423f4469f31cda73cd7d1df568e555d9200'}]}, 'timestamp': '2025-10-05 09:48:38.937577', '_unique_id': '084032fcfcdb458b95d521623a7adbcf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:48:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f61693a5-50a3-4262-abce-16e2ac68cfd4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.938894', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '782427f0-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': '276eac7d5e5dde89e230a5955fda54355a84a5f511a9ad5879c540b6e3746bca'}]}, 'timestamp': '2025-10-05 09:48:38.939176', '_unique_id': '616f97d8a4f043fb89108dd165998fa0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.940 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.940 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a618f0ad-8257-4464-b239-aef9e013370b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:48:38.940651', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '78246c7e-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.085838131, 'message_signature': 'afe0b019ed86c47d165b47214cd2a6373aef6154f89b7658530b2035b9c6711a'}]}, 'timestamp': '2025-10-05 09:48:38.940930', '_unique_id': 'cfa501a753414a6bb720cdfb870c3970'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.941 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.942 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.942 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.942 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '404dc71a-e41a-4496-a19a-bcbaad297a1a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:48:38.942223', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7824a9d2-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.118494471, 'message_signature': 'f1aaf4cb666bcd53cfb8572acc29ef19df1d6a7792d668b928012f9e435e1bb0'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:48:38.942223', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7824b4e0-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.118494471, 'message_signature': '6166419f0e074a7a864ff0af778a34c92288e6142652208ca78dd6e6ccbc3abc'}]}, 'timestamp': '2025-10-05 09:48:38.942766', '_unique_id': '0c4ae2273a6e4562b1d2a397cc7b017e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.944 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.944 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.944 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': 'd30497b9-e91f-44e2-86fc-6dad129c8169', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:48:38.944083', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7824f284-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': '99d5cbf138fbeb1e1d7d96843e3e4390814937ea365222f65afde1a03613bfd1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:48:38.944083', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7824fd6a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11198.059876853, 'message_signature': '68c0b7ae6b1a244b554489b61f71010eaff652e0107a957c0525bc9becc0ccc5'}]}, 'timestamp': '2025-10-05 09:48:38.944623', '_unique_id': '4796d500f5bd46fda838b138811fe5dc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:48:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:48:38.945 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:48:39 localhost podman[298361]: 2025-10-05 09:48:39.672967386 +0000 UTC m=+0.080592468 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:48:39 localhost podman[298361]: 2025-10-05 09:48:39.754526449 +0000 UTC m=+0.162151551 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 05:48:39 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:48:41 localhost nova_compute[297021]: 2025-10-05 09:48:41.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:41 localhost nova_compute[297021]: 2025-10-05 09:48:41.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21162 DF PROTO=TCP SPT=36038 DPT=9102 SEQ=3889988424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF407FD0000000001030307) Oct 5 05:48:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:48:43 localhost systemd[1]: tmp-crun.diUb4o.mount: Deactivated successfully. Oct 5 05:48:43 localhost podman[298387]: 2025-10-05 09:48:43.686876094 +0000 UTC m=+0.093179012 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm) Oct 5 05:48:43 localhost podman[298387]: 2025-10-05 
09:48:43.696525466 +0000 UTC m=+0.102828404 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:48:43 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:48:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21163 DF PROTO=TCP SPT=36038 DPT=9102 SEQ=3889988424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF40C1E0000000001030307)
Oct 5 05:48:46 localhost nova_compute[297021]: 2025-10-05 09:48:46.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:48:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21164 DF PROTO=TCP SPT=36038 DPT=9102 SEQ=3889988424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF4142D0000000001030307)
Oct 5 05:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.
Oct 5 05:48:46 localhost nova_compute[297021]: 2025-10-05 09:48:46.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:48:46 localhost podman[298406]: 2025-10-05 09:48:46.689670061 +0000 UTC m=+0.095447913 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public) Oct 5 05:48:46 localhost podman[298406]: 2025-10-05 09:48:46.70688372 +0000 UTC m=+0.112661522 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 05:48:46 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:48:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:48:48 localhost podman[298424]: 2025-10-05 09:48:48.676951407 +0000 UTC m=+0.083411595 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:48:48 localhost podman[298424]: 2025-10-05 09:48:48.68989747 +0000 UTC m=+0.096357608 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:48:48 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 05:48:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21165 DF PROTO=TCP SPT=36038 DPT=9102 SEQ=3889988424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF423DE0000000001030307)
Oct 5 05:48:51 localhost nova_compute[297021]: 2025-10-05 09:48:51.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:48:51 localhost podman[248506]: time="2025-10-05T09:48:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:48:51 localhost podman[248506]: @ - - [05/Oct/2025:09:48:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 137513 "" "Go-http-client/1.1"
Oct 5 05:48:51 localhost podman[248506]: @ - - [05/Oct/2025:09:48:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17358 "" "Go-http-client/1.1"
Oct 5 05:48:51 localhost nova_compute[297021]: 2025-10-05 09:48:51.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:48:52 localhost openstack_network_exporter[250601]: ERROR 09:48:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:48:52 localhost openstack_network_exporter[250601]: ERROR 09:48:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:48:52 localhost
openstack_network_exporter[250601]: ERROR 09:48:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:48:52 localhost openstack_network_exporter[250601]: ERROR 09:48:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:48:52 localhost openstack_network_exporter[250601]:
Oct 5 05:48:52 localhost openstack_network_exporter[250601]: ERROR 09:48:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:48:52 localhost openstack_network_exporter[250601]:
Oct 5 05:48:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 05:48:53 localhost podman[298447]: 2025-10-05 09:48:53.675696746 +0000 UTC m=+0.082539461 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount':
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 05:48:53 localhost podman[298447]: 2025-10-05 09:48:53.686467259 +0000 UTC m=+0.093309964 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 05:48:53 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 05:48:56 localhost nova_compute[297021]: 2025-10-05 09:48:56.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:48:56 localhost nova_compute[297021]: 2025-10-05 09:48:56.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:48:57 localhost systemd[1]: tmp-crun.FjwuUD.mount: Deactivated successfully.
Oct 5 05:48:57 localhost podman[298469]: 2025-10-05 09:48:57.688642106 +0000 UTC m=+0.097077827 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log',
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 05:48:57 localhost podman[298469]: 2025-10-05 09:48:57.700428127 +0000 UTC m=+0.108863838 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log',
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 5 05:48:57 localhost podman[298470]: 2025-10-05 09:48:57.738155465 +0000 UTC m=+0.140917042 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z',
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 5 05:48:57 localhost podman[298470]: 2025-10-05 09:48:57.749306029 +0000 UTC m=+0.152067576 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 5 05:48:57
localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 05:48:57 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:49:01 localhost nova_compute[297021]: 2025-10-05 09:49:01.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:49:01 localhost nova_compute[297021]: 2025-10-05 09:49:01.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:49:02 localhost sshd[298507]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:49:02 localhost nova_compute[297021]: 2025-10-05 09:49:02.570 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:02 localhost nova_compute[297021]: 2025-10-05 09:49:02.571 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:02 localhost nova_compute[297021]: 2025-10-05 09:49:02.596 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:02 localhost nova_compute[297021]: 2025-10-05 09:49:02.596 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 05:49:02 localhost nova_compute[297021]: 2025-10-05 09:49:02.596 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 05:49:03 localhost nova_compute[297021]: 2025-10-05 09:49:03.647 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 5 05:49:03 localhost nova_compute[297021]: 2025-10-05 09:49:03.648 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 5 05:49:03 localhost nova_compute[297021]: 2025-10-05 09:49:03.648 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 5 05:49:03 localhost nova_compute[297021]: 2025-10-05 09:49:03.649 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.195 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address":
"fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.213 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.213 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.214 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task 
ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.215 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.215 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.216 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.216 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.218 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.218 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping...
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.218 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.236 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.237 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.237 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.238 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.238 2 DEBUG
oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:49:05 localhost systemd[1]: tmp-crun.djFFjq.mount: Deactivated successfully.
Oct 5 05:49:05 localhost podman[298531]: 2025-10-05 09:49:05.707963317 +0000 UTC m=+0.111601742 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z',
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:49:05 localhost podman[298531]: 2025-10-05 09:49:05.718074633 +0000 UTC m=+0.121713068 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro',
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 5 05:49:05 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.734 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.810 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 05:49:05 localhost nova_compute[297021]: 2025-10-05 09:49:05.810 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.046 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node.
The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.048 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12266MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id":
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.049 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.049 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.181 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.181 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.182 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.234 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.706 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.713 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 
8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.733 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.773 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:49:06 localhost nova_compute[297021]: 2025-10-05 09:49:06.774 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:49:10 localhost podman[298575]: 2025-10-05 09:49:10.672094623 +0000 UTC m=+0.082617342 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 05:49:10 localhost podman[298575]: 2025-10-05 09:49:10.716154884 +0000 UTC m=+0.126677603 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:49:10 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:49:11 localhost nova_compute[297021]: 2025-10-05 09:49:11.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:11 localhost nova_compute[297021]: 2025-10-05 09:49:11.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:12 localhost sshd[298601]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:49:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4264 DF PROTO=TCP SPT=33850 DPT=9102 SEQ=863623722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF47D2E0000000001030307) Oct 5 05:49:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4265 DF PROTO=TCP SPT=33850 DPT=9102 SEQ=863623722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF4811D0000000001030307) Oct 5 05:49:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:49:14 localhost podman[298604]: 2025-10-05 09:49:14.67980501 +0000 UTC m=+0.086277653 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Oct 5 05:49:14 localhost podman[298604]: 2025-10-05 09:49:14.693524694 +0000 UTC m=+0.099997347 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:49:14 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:49:16 localhost nova_compute[297021]: 2025-10-05 09:49:16.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4266 DF PROTO=TCP SPT=33850 DPT=9102 SEQ=863623722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF4891D0000000001030307) Oct 5 05:49:16 localhost nova_compute[297021]: 2025-10-05 09:49:16.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:49:17 localhost podman[298625]: 2025-10-05 09:49:17.00655362 +0000 UTC m=+0.081917705 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, distribution-scope=public, name=ubi9-minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 05:49:17 localhost podman[298625]: 2025-10-05 09:49:17.04874236 +0000 UTC m=+0.124106405 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, 
vcs-type=git, version=9.6, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter) Oct 5 05:49:17 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:49:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:49:19 localhost podman[298646]: 2025-10-05 09:49:19.684662597 +0000 UTC m=+0.087127696 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:49:19 localhost podman[298646]: 2025-10-05 09:49:19.697854186 +0000 UTC m=+0.100319295 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:49:19 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:49:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:49:20.449 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:49:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:49:20.449 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:49:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:49:20.452 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:49:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4267 DF PROTO=TCP SPT=33850 DPT=9102 SEQ=863623722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF498DD0000000001030307) Oct 5 05:49:21 localhost nova_compute[297021]: 2025-10-05 09:49:21.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:21 localhost podman[248506]: time="2025-10-05T09:49:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 
05:49:21 localhost podman[248506]: @ - - [05/Oct/2025:09:49:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 137513 "" "Go-http-client/1.1" Oct 5 05:49:21 localhost podman[248506]: @ - - [05/Oct/2025:09:49:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17355 "" "Go-http-client/1.1" Oct 5 05:49:21 localhost nova_compute[297021]: 2025-10-05 09:49:21.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:22 localhost openstack_network_exporter[250601]: ERROR 09:49:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:49:22 localhost openstack_network_exporter[250601]: ERROR 09:49:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:49:22 localhost openstack_network_exporter[250601]: ERROR 09:49:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:49:22 localhost openstack_network_exporter[250601]: ERROR 09:49:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:49:22 localhost openstack_network_exporter[250601]: Oct 5 05:49:22 localhost openstack_network_exporter[250601]: ERROR 09:49:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:49:22 localhost openstack_network_exporter[250601]: Oct 5 05:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:49:23 localhost podman[298669]: 2025-10-05 09:49:23.849470216 +0000 UTC m=+0.087573097 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:49:23 localhost podman[298669]: 2025-10-05 09:49:23.858730119 +0000 UTC m=+0.096833000 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:49:23 localhost sshd[298687]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:49:23 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:49:26 localhost nova_compute[297021]: 2025-10-05 09:49:26.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:26 localhost nova_compute[297021]: 2025-10-05 09:49:26.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:49:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:49:28 localhost podman[298697]: 2025-10-05 09:49:28.665356494 +0000 UTC m=+0.073444603 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 05:49:28 localhost podman[298697]: 2025-10-05 09:49:28.680765524 +0000 UTC m=+0.088853613 container 
exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:49:28 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:49:28 localhost podman[298698]: 2025-10-05 09:49:28.729922783 +0000 UTC m=+0.133229352 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 05:49:28 localhost podman[298698]: 2025-10-05 09:49:28.748815868 +0000 UTC m=+0.152122457 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 05:49:28 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:49:31 localhost nova_compute[297021]: 2025-10-05 09:49:31.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:31 localhost nova_compute[297021]: 2025-10-05 09:49:31.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:36 localhost nova_compute[297021]: 2025-10-05 09:49:36.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:49:36 localhost podman[298877]: 2025-10-05 09:49:36.690162325 +0000 UTC m=+0.094007694 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:49:36 localhost podman[298877]: 2025-10-05 09:49:36.700949779 +0000 UTC m=+0.104795138 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:49:36 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:49:36 localhost nova_compute[297021]: 2025-10-05 09:49:36.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:39 localhost sshd[298896]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:49:39 localhost systemd-logind[760]: New session 65 of user zuul. Oct 5 05:49:39 localhost systemd[1]: Started Session 65 of User zuul. Oct 5 05:49:39 localhost python3[298918]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:49:40 localhost subscription-manager[298919]: Unregistered machine with identity: 389ffb6f-80ba-4204-ad63-11cd8b0f11fc Oct 5 05:49:40 localhost systemd-journald[47722]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. 
Oct 5 05:49:40 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:49:40 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:49:40 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:49:41 localhost nova_compute[297021]: 2025-10-05 09:49:41.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:49:41 localhost systemd[1]: tmp-crun.RWQxRd.mount: Deactivated successfully. Oct 5 05:49:41 localhost podman[298922]: 2025-10-05 09:49:41.691442353 +0000 UTC m=+0.096328317 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:49:41 localhost podman[298922]: 2025-10-05 09:49:41.797496163 +0000 UTC m=+0.202382157 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller) Oct 5 05:49:41 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated 
successfully. Oct 5 05:49:41 localhost nova_compute[297021]: 2025-10-05 09:49:41.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7974 DF PROTO=TCP SPT=50138 DPT=9102 SEQ=645001682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF4F25E0000000001030307) Oct 5 05:49:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7975 DF PROTO=TCP SPT=50138 DPT=9102 SEQ=645001682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF4F65E0000000001030307) Oct 5 05:49:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:49:45 localhost podman[298947]: 2025-10-05 09:49:45.688186262 +0000 UTC m=+0.096067109 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, managed_by=edpm_ansible, container_name=ceilometer_agent_compute) Oct 5 05:49:45 localhost podman[298947]: 2025-10-05 09:49:45.70497241 +0000 UTC m=+0.112853267 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3) Oct 5 05:49:45 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:49:46 localhost nova_compute[297021]: 2025-10-05 09:49:46.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7976 DF PROTO=TCP SPT=50138 DPT=9102 SEQ=645001682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF4FE5D0000000001030307) Oct 5 05:49:46 localhost nova_compute[297021]: 2025-10-05 09:49:46.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:49:47 localhost podman[298965]: 2025-10-05 09:49:47.679125759 +0000 UTC m=+0.083877957 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 05:49:47 localhost podman[298965]: 2025-10-05 09:49:47.697258443 +0000 UTC m=+0.102010651 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 05:49:47 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:49:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7977 DF PROTO=TCP SPT=50138 DPT=9102 SEQ=645001682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF50E1D0000000001030307) Oct 5 05:49:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:49:50 localhost podman[298986]: 2025-10-05 09:49:50.67823434 +0000 UTC m=+0.082313484 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:49:50 localhost podman[298986]: 2025-10-05 09:49:50.712924684 +0000 UTC m=+0.117003838 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:49:50 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:49:51 localhost nova_compute[297021]: 2025-10-05 09:49:51.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:51 localhost podman[248506]: time="2025-10-05T09:49:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:49:51 localhost podman[248506]: @ - - [05/Oct/2025:09:49:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 137513 "" "Go-http-client/1.1" Oct 5 05:49:51 localhost podman[248506]: @ - - [05/Oct/2025:09:49:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17355 "" "Go-http-client/1.1" Oct 5 05:49:51 localhost nova_compute[297021]: 2025-10-05 09:49:51.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:52 localhost openstack_network_exporter[250601]: ERROR 09:49:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:49:52 localhost openstack_network_exporter[250601]: ERROR 09:49:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:49:52 localhost openstack_network_exporter[250601]: ERROR 09:49:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:49:52 localhost openstack_network_exporter[250601]: ERROR 09:49:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:49:52 localhost 
openstack_network_exporter[250601]: Oct 5 05:49:52 localhost openstack_network_exporter[250601]: ERROR 09:49:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:49:52 localhost openstack_network_exporter[250601]: Oct 5 05:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:49:54 localhost podman[299009]: 2025-10-05 09:49:54.687208657 +0000 UTC m=+0.090295499 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:49:54 localhost podman[299009]: 2025-10-05 09:49:54.69620682 
+0000 UTC m=+0.099293662 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:49:54 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:49:56 localhost nova_compute[297021]: 2025-10-05 09:49:56.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:56 localhost nova_compute[297021]: 2025-10-05 09:49:56.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:49:59 localhost podman[299033]: 2025-10-05 09:49:59.678139147 +0000 UTC m=+0.083624621 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:49:59 localhost podman[299033]: 2025-10-05 09:49:59.690862368 +0000 UTC m=+0.096347822 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:49:59 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:49:59 localhost podman[299034]: 2025-10-05 09:49:59.784128668 +0000 UTC m=+0.185441439 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:49:59 localhost podman[299034]: 2025-10-05 09:49:59.799949083 +0000 UTC m=+0.201261814 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, 
managed_by=edpm_ansible, container_name=multipathd) Oct 5 05:49:59 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:50:01 localhost nova_compute[297021]: 2025-10-05 09:50:01.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:01 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Oct 5 05:50:01 localhost nova_compute[297021]: 2025-10-05 09:50:01.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.776 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.776 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.777 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.777 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the 
list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.852 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.853 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.853 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.853 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:50:06 localhost nova_compute[297021]: 2025-10-05 09:50:06.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.417 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": 
"20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.433 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.434 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.435 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.435 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.435 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.436 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.436 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.437 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.437 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.438 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.458 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.459 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.459 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.460 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.460 2 DEBUG 
oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:50:07 localhost podman[299074]: 2025-10-05 09:50:07.680550992 +0000 UTC m=+0.087737412 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 05:50:07 localhost podman[299074]: 2025-10-05 09:50:07.714003151 +0000 UTC m=+0.121189561 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:50:07 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:50:07 localhost nova_compute[297021]: 2025-10-05 09:50:07.973 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.034 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.034 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.242 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.244 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12250MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.244 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.245 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.334 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.334 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.335 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.379 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.873 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.880 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.903 2 DEBUG 
nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.906 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:50:08 localhost nova_compute[297021]: 2025-10-05 09:50:08.906 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:50:11 localhost nova_compute[297021]: 2025-10-05 09:50:11.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:12 localhost nova_compute[297021]: 2025-10-05 09:50:12.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:50:12 localhost podman[299134]: 2025-10-05 09:50:12.67226694 +0000 UTC m=+0.080415674 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3) Oct 5 05:50:12 localhost podman[299134]: 2025-10-05 09:50:12.707812376 +0000 UTC m=+0.115961090 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:50:12 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:50:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55297 DF PROTO=TCP SPT=59084 DPT=9102 SEQ=4261113459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF5678D0000000001030307) Oct 5 05:50:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55298 DF PROTO=TCP SPT=59084 DPT=9102 SEQ=4261113459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF56B9D0000000001030307) Oct 5 05:50:15 localhost sshd[299213]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:50:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:50:15 localhost podman[299215]: 2025-10-05 09:50:15.921888529 +0000 UTC m=+0.092609102 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Oct 5 05:50:15 localhost podman[299215]: 2025-10-05 09:50:15.935943337 +0000 UTC m=+0.106663920 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 05:50:15 localhost systemd-logind[760]: New session 66 of user tripleo-admin. Oct 5 05:50:15 localhost systemd[1]: Created slice User Slice of UID 1003. Oct 5 05:50:15 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... 
Oct 5 05:50:15 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:50:15 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Oct 5 05:50:15 localhost systemd[1]: Starting User Manager for UID 1003... Oct 5 05:50:16 localhost systemd[299237]: Queued start job for default target Main User Target. Oct 5 05:50:16 localhost systemd[299237]: Created slice User Application Slice. Oct 5 05:50:16 localhost systemd[299237]: Started Mark boot as successful after the user session has run 2 minutes. Oct 5 05:50:16 localhost systemd[299237]: Started Daily Cleanup of User's Temporary Directories. Oct 5 05:50:16 localhost systemd[299237]: Reached target Paths. Oct 5 05:50:16 localhost systemd[299237]: Reached target Timers. Oct 5 05:50:16 localhost systemd[299237]: Starting D-Bus User Message Bus Socket... Oct 5 05:50:16 localhost systemd[299237]: Starting Create User's Volatile Files and Directories... Oct 5 05:50:16 localhost systemd[299237]: Finished Create User's Volatile Files and Directories. Oct 5 05:50:16 localhost systemd[299237]: Listening on D-Bus User Message Bus Socket. Oct 5 05:50:16 localhost systemd[299237]: Reached target Sockets. Oct 5 05:50:16 localhost systemd[299237]: Reached target Basic System. Oct 5 05:50:16 localhost systemd[299237]: Reached target Main User Target. Oct 5 05:50:16 localhost systemd[299237]: Startup finished in 130ms. Oct 5 05:50:16 localhost systemd[1]: Started User Manager for UID 1003. Oct 5 05:50:16 localhost systemd[1]: Started Session 66 of User tripleo-admin. 
Oct 5 05:50:16 localhost nova_compute[297021]: 2025-10-05 09:50:16.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:1f:49:af MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55299 DF PROTO=TCP SPT=59084 DPT=9102 SEQ=4261113459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080ABEF5739D0000000001030307) Oct 5 05:50:17 localhost nova_compute[297021]: 2025-10-05 09:50:17.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:17 localhost python3[299380]: ansible-ansible.builtin.systemd Invoked with name=iptables state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:50:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:50:18 localhost podman[299525]: 2025-10-05 09:50:18.108975248 +0000 UTC m=+0.086378514 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 05:50:18 localhost podman[299525]: 2025-10-05 09:50:18.127517517 +0000 UTC m=+0.104920743 container exec_died 
2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 05:50:18 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:50:18 localhost python3[299526]: ansible-ansible.builtin.systemd Invoked with name=nftables state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:50:18 localhost systemd[1]: Stopping Netfilter Tables... Oct 5 05:50:18 localhost systemd[1]: nftables.service: Deactivated successfully. Oct 5 05:50:18 localhost systemd[1]: Stopped Netfilter Tables. 
Oct 5 05:50:19 localhost python3[299695]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/tripleo-rules.nft block=# 100 ceph_alertmanager {'dport': [9093]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard {'dport': [8443]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana {'dport': [3100]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus {'dport': [9092]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw {'dport': ['8080']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon {'dport': [6789, 3300, '9100']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds {'dport': ['6800-7300', '9100']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr {'dport': ['6800-7300', 8444]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs {'dport': ['12049', '2049']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 2049 } ct state new counter accept comment "120 ceph_nfs"#012# 122 ceph rgw {'dport': ['8080', '8080', '9100']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 8080,8080,9100 } ct state new counter accept comment "122 ceph rgw"#012# 123 ceph_dashboard {'dport': [3100, 9090, 9092, 9093, 9094, 9100, 9283]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 3100,9090,9092,9093,9094,9100,9283 } ct state new counter 
accept comment "123 ceph_dashboard"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:50:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:50:20.450 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:50:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:50:20.450 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:50:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:50:20.451 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:50:21 localhost nova_compute[297021]: 2025-10-05 09:50:21.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:21 localhost podman[248506]: time="2025-10-05T09:50:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:50:21 localhost podman[248506]: @ - - [05/Oct/2025:09:50:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 137513 "" "Go-http-client/1.1" Oct 5 05:50:21 localhost podman[248506]: @ - - [05/Oct/2025:09:50:21 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17349 "" "Go-http-client/1.1" Oct 5 05:50:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:50:21 localhost podman[299713]: 2025-10-05 09:50:21.669611653 +0000 UTC m=+0.077176776 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:50:21 localhost podman[299713]: 2025-10-05 09:50:21.681819002 +0000 UTC m=+0.089384115 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:50:21 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:50:22 localhost openstack_network_exporter[250601]: ERROR 09:50:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:50:22 localhost openstack_network_exporter[250601]: ERROR 09:50:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:50:22 localhost openstack_network_exporter[250601]: ERROR 09:50:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:50:22 localhost openstack_network_exporter[250601]: ERROR 09:50:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:50:22 localhost openstack_network_exporter[250601]: Oct 5 05:50:22 localhost openstack_network_exporter[250601]: ERROR 09:50:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:50:22 localhost openstack_network_exporter[250601]: Oct 5 05:50:22 localhost nova_compute[297021]: 2025-10-05 09:50:22.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:50:25 localhost systemd[1]: tmp-crun.aLRmSk.mount: Deactivated successfully. 
Oct 5 05:50:25 localhost podman[299736]: 2025-10-05 09:50:25.666354269 +0000 UTC m=+0.080499097 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:50:25 localhost podman[299736]: 2025-10-05 09:50:25.677859548 +0000 UTC m=+0.092004416 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:50:25 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:50:26 localhost nova_compute[297021]: 2025-10-05 09:50:26.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:27 localhost nova_compute[297021]: 2025-10-05 09:50:27.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:50:30 localhost systemd[1]: tmp-crun.Tt89hr.mount: Deactivated successfully. 
Oct 5 05:50:30 localhost podman[299796]: 2025-10-05 09:50:30.315156255 +0000 UTC m=+0.087057842 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:50:30 localhost podman[299796]: 2025-10-05 09:50:30.327659572 +0000 UTC m=+0.099561169 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:50:30 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:50:30 localhost podman[299797]: 2025-10-05 09:50:30.409418761 +0000 UTC m=+0.178304707 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd) Oct 5 05:50:30 localhost podman[299797]: 2025-10-05 09:50:30.424925169 +0000 UTC m=+0.193811165 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:50:30 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:50:31 localhost nova_compute[297021]: 2025-10-05 09:50:31.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:32 localhost nova_compute[297021]: 2025-10-05 09:50:32.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:36 localhost nova_compute[297021]: 2025-10-05 09:50:36.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:37 localhost nova_compute[297021]: 2025-10-05 09:50:37.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:50:38 localhost podman[299974]: 2025-10-05 09:50:38.700740677 +0000 UTC m=+0.101047949 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent) Oct 5 05:50:38 localhost podman[299974]: 2025-10-05 09:50:38.73021597 +0000 UTC m=+0.130523252 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Oct 5 05:50:38 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.834 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.836 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of 
pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.843 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c6ae3ab6-618a-41cd-90a5-d7a4e5967d87', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.836264', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bf9c25c4-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 
11318.060174454, 'message_signature': 'e3a9f1948c35cce2de81ab38ae35ca2b64ce4328f65863dcdf8fb66356c34673'}]}, 'timestamp': '2025-10-05 09:50:38.843959', '_unique_id': 'd5b2adef539045938bc29688655d4e9e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging raise 
ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.845 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.846 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.862 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.862 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '75f48e50-9153-449e-96e2-ece1d595b2a7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:50:38.847027', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bf9f10d6-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.070945784, 'message_signature': '7a4376aa4fd9ef4231836a8351b30eb2eb55905dadc362f09ff5c3d628e73878'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:50:38.847027', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bf9f21de-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.070945784, 'message_signature': '9ce6198d908627559c6dd41d8b352c2b8a73c59b41829c1df116b579531fe22a'}]}, 'timestamp': '2025-10-05 09:50:38.863439', '_unique_id': '66342b83ae674b7fb3036f0bf77d91d7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.864 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.865 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.865 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '393c22d9-7f10-4fde-9b46-14a9397530e1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.865937', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bf9f95d8-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.060174454, 'message_signature': 'af33a478927b7a90bed44c707e09e1b51797e79c8019b0ce3de7aa60ff7702ed'}]}, 'timestamp': '2025-10-05 09:50:38.866438', '_unique_id': '3865dbac82aa4103901a06805226b650'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.867 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.868 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.868 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ed37af7d-229b-4091-9c67-1dc7a7f5dc17', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.868594', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bf9ffe74-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.060174454, 'message_signature': '6e7e7359164bcaeda79378736f7503016f9a8b94c0e06ce3a0eb942f34573bbb'}]}, 'timestamp': '2025-10-05 09:50:38.869128', '_unique_id': '01e3bd1d1e5944f5842b20ceddea122b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.870 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.871 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.888 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 11810000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd932b2c0-d0ca-4daf-97dc-253b422e518f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 11810000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:50:38.871486', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'bfa30844-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.112304017, 'message_signature': '2f4fb4c6df38d3ad7d2a1d97cb771fd584c3dcd9a93968212b5d30d7178efc76'}]}, 'timestamp': '2025-10-05 09:50:38.888990', '_unique_id': '8cfae74c61794ce18a614fbad15d2355'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.890 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.891 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.892 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.892 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3fc4bb74-9150-49a4-a170-e93b821c33e7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:50:38.892032', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bfa39160-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.070945784, 'message_signature': 'cfcd7c1ba533d48c271fd398d2c11a20ff47bb759c92e4b65956606f9b9eb90a'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:50:38.892032', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bfa3a600-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.070945784, 'message_signature': '7a695d36a31396b09e8aed0aea51b623e138a110314bea77fabcdc332620fdea'}]}, 'timestamp': '2025-10-05 09:50:38.893009', '_unique_id': 'cd2c7d57c8fb4886a4956ac6991d913c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.894 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.895 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.895 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a21d9d10-e0a7-4a69-b18f-542bbb7ca771', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.895295', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bfa416c6-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.060174454, 'message_signature': 'dced2d8b85304ea7f1ec1d151cfe8b8cf59f4c9c356b669055dc9d7e9dc33461'}]}, 'timestamp': '2025-10-05 09:50:38.896025', '_unique_id': 'f07347f6a02f44c18e1a5af153f98d83'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.898 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.898 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bfc6cc54-5dad-428a-b4d2-a547cad3d6f9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.898232', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bfa48480-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.060174454, 'message_signature': '4f6b57f76045a84063ac28e0a4c1c71ba943dcfd551deb7deada2ecd89d96ad7'}]}, 'timestamp': '2025-10-05 09:50:38.898733', '_unique_id': 'a82f135a35f549818df681a3d4551719'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.899 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.900 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.901 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9a727e28-00d2-49f7-931d-525db79afa9d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.901065', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bfa4f352-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.060174454, 'message_signature': 'c7c867ba570b61887f5c878199ecd7ae9224b3780377d1855bc6dd0c9e31edb8'}]}, 'timestamp': '2025-10-05 09:50:38.901603', '_unique_id': '4f713b40f0cb4541afdb46c5e8d8c53f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.903 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.925 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.926 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0a272812-aeb6-459f-9062-fd3a1afdea93', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:50:38.903907', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bfa8b41a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': 'd9857c7ab2bb0cffb360d889408bc1248f0db0d53c4983bbf65abb7c3c3b9942'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:50:38.903907', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bfa8c4a0-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': '72c139a8c303ae453fc155656aa98a2f3377a15cd4c9f0c62898545ff7006d07'}]}, 'timestamp': '2025-10-05 09:50:38.926599', '_unique_id': 'd9823264adc74d36ad324e97b624d214'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.927 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.928 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.929 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.929 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd62e49da-093b-4b83-a54e-18bfe55061aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:50:38.929022', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bfa9373c-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': '8bb8dcb65d0eb1c9960975b8657ceb8f7554f46b5b1c18fd1eb01f76f40c50ea'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:50:38.929022', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bfa9495c-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': '925ce69502dacc9258b006f99c29cf844f07cda2191f02384c679c6a3b4edca9'}]}, 'timestamp': '2025-10-05 09:50:38.929955', '_unique_id': '3efc908f96fd441bb894c8b9f4f5fdbf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.930 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.932 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.932 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.932 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b8f05632-f8d8-4775-8f41-b7d1bc1ec725', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:50:38.932230', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bfa9b45a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.070945784, 'message_signature': 'e1c797165f2599c5d26ff7d7ea793cbefb54395efb8f8d89a4ff5b1f6df35652'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:50:38.932230', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bfa9c508-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.070945784, 'message_signature': '967c9dc685c71f1f0a504896917350019fa197346c27393eb93da3ac43aa51bc'}]}, 'timestamp': '2025-10-05 09:50:38.933116', '_unique_id': '6635e75d49af4af18528ee3830f541b6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.934 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.935 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ae3e9482-30bf-45de-9b41-b04da531c030', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.935463', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bfaa31d2-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.060174454, 'message_signature': 'ebb033297bbd5f1013f00a3ef0b57071cb3ce7d65637b259e8b8b33a99f543ea'}]}, 'timestamp': '2025-10-05 09:50:38.935938', '_unique_id': '012a6bc724624ef991e426bfe4bb0653'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05
09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.938 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'db95a63d-f8bb-46fd-ac2e-1529a4382364', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:50:38.938266', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bfaaa040-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': 'f8905469e8ad74663bb8de364c6a0b013c0eaf33fa5d5f4f1730f7a14d0d1f9a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:50:38.938266', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bfaab670-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': 'ef213e821e2fdf5e69a9698ddbd849f1d4f5003d145bec6ea6ecad97ee7873d0'}]}, 'timestamp': '2025-10-05 09:50:38.939297', '_unique_id': '01651e5c04bb4dc69a272c5719f947e1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.940 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.941 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f09698dd-9708-44af-b00e-08ddd13a1f02', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.941770', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bfab28f8-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.060174454, 'message_signature': 'c0e6181aaf588d8791866aa73341a83a48c4b2c8a532b3105f647f18be709f81'}]}, 'timestamp': '2025-10-05 09:50:38.942327', '_unique_id': '8ec7bbbbcb9e42abac2f2dd99bed28b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.944 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.944 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd6e89140-9939-4339-9d7a-0090741fd2e5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:50:38.944583', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bfab959a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': '84352b9279f81bcbc18740009a1aafe2b15d1d1a7c3e172828e52ee4101e5599'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:50:38.944583', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bfaba738-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': 'fb56c69afa4ad5dee87929e7ad99ab976b342959d00d32c85c61fe8672c91ecd'}]}, 'timestamp': '2025-10-05 09:50:38.945501', '_unique_id': '932e7d22154b4dc7b11fc13620b3bddb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:50:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.947 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.947 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e505b686-c271-49d3-bbff-3e1baebc1591', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.947754', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 
'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bfac11c8-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.060174454, 'message_signature': 'd3e2a279285ec19628ae3d96acb68f666e08c350c61a969a089e115562c2503d'}]}, 'timestamp': '2025-10-05 09:50:38.948221', '_unique_id': '8f2f69e30f934621ba934deebcf5457e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in 
_connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.950 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.950 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.951 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '42c0a0d8-5749-4c2a-8a0c-e4868e970ccc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:50:38.950488', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bfac811c-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': '56b1428f628ee8eee30ddadba898a6dc11da0caaefe063162efb17d7d52f6201'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:50:38.950488', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bfac9896-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': '55c09ebf8ee03deb1c9ab1b0650e2bb38eb1f0167cba3003b20fdc2accc3e0d3'}]}, 'timestamp': '2025-10-05 09:50:38.951663', '_unique_id': '4f09f8b1a63a486888e3da78c7c68e4a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.954 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '67f0bb33-2ae2-4612-97cf-391c1ffa2594', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:50:38.954324', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'bfad13f2-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.112304017, 'message_signature': '342f364d05ff084878fd9f3becf692ec8e4a10b525533c70d4e55db11ed9eb89'}]}, 'timestamp': '2025-10-05 09:50:38.954820', '_unique_id': '0acaae66b0ec4d1e8fbf70fa316379be'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.957 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.957 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.957 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2a8e6284-0110-46f9-b938-9b02d2aae916', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:50:38.957230', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bfad8562-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': '56a66ac29d607791cf580823bba144cc8cc666b9fc92200c7f5d58c1f6280b0c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:50:38.957230', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bfad975a-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.127847824, 'message_signature': '351660ffbcfdc4bc57da1862d9930ad5ff7b3ba1c75d971d21898ac351ae47e4'}]}, 'timestamp': '2025-10-05 09:50:38.958164', '_unique_id': '2f2ddd9a9473476f82317c29689ccb33'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.959 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.960 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.960 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '30faa333-894e-4372-b5e4-95d5e8b811e5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:50:38.960422', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'bfae0122-a1d0-11f0-9396-fa163ec6f33d', 'monotonic_time': 11318.060174454, 'message_signature': '4d0bccd0851f4aeb64c44753bf19b15ecf83f7b3d2f0f313f3f5f301ad4256ec'}]}, 'timestamp': '2025-10-05 09:50:38.960980', '_unique_id': '172a336fd89347f592c17baabc35c3b0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:50:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:50:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:50:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 05:50:40 localhost systemd[1]: session-65.scope: 
Deactivated successfully. Oct 5 05:50:40 localhost systemd-logind[760]: Session 65 logged out. Waiting for processes to exit. Oct 5 05:50:40 localhost systemd-logind[760]: Removed session 65. Oct 5 05:50:41 localhost nova_compute[297021]: 2025-10-05 09:50:41.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:42 localhost nova_compute[297021]: 2025-10-05 09:50:42.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:42 localhost podman[300070]: Oct 5 05:50:42 localhost podman[300070]: 2025-10-05 09:50:42.374526007 +0000 UTC m=+0.084882454 container create 50a02a4c440cb0f5188fe3693de264db7068468e4af40700dea0ec6249fecf7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nobel, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:50:42 localhost systemd[1]: Started 
libpod-conmon-50a02a4c440cb0f5188fe3693de264db7068468e4af40700dea0ec6249fecf7c.scope. Oct 5 05:50:42 localhost systemd[1]: Started libcrun container. Oct 5 05:50:42 localhost podman[300070]: 2025-10-05 09:50:42.339932757 +0000 UTC m=+0.050289234 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:50:42 localhost podman[300070]: 2025-10-05 09:50:42.456525492 +0000 UTC m=+0.166881929 container init 50a02a4c440cb0f5188fe3693de264db7068468e4af40700dea0ec6249fecf7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nobel, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, name=rhceph, ceph=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container) Oct 5 05:50:42 localhost podman[300070]: 2025-10-05 09:50:42.468444233 +0000 UTC m=+0.178800680 container start 50a02a4c440cb0f5188fe3693de264db7068468e4af40700dea0ec6249fecf7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nobel, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, version=7, 
architecture=x86_64, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:50:42 localhost podman[300070]: 2025-10-05 09:50:42.470373095 +0000 UTC m=+0.180729542 container attach 50a02a4c440cb0f5188fe3693de264db7068468e4af40700dea0ec6249fecf7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nobel, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.33.12, vcs-type=git, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:50:42 localhost awesome_nobel[300085]: 167 167 Oct 5 05:50:42 localhost systemd[1]: libpod-50a02a4c440cb0f5188fe3693de264db7068468e4af40700dea0ec6249fecf7c.scope: Deactivated successfully. Oct 5 05:50:42 localhost podman[300070]: 2025-10-05 09:50:42.473940651 +0000 UTC m=+0.184297088 container died 50a02a4c440cb0f5188fe3693de264db7068468e4af40700dea0ec6249fecf7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nobel, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.33.12, RELEASE=main) Oct 5 05:50:42 localhost podman[300090]: 2025-10-05 09:50:42.585386868 +0000 UTC m=+0.095928851 container remove 50a02a4c440cb0f5188fe3693de264db7068468e4af40700dea0ec6249fecf7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nobel, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, 
release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph) Oct 5 05:50:42 localhost systemd[1]: libpod-conmon-50a02a4c440cb0f5188fe3693de264db7068468e4af40700dea0ec6249fecf7c.scope: Deactivated successfully. Oct 5 05:50:42 localhost systemd[1]: Reloading. Oct 5 05:50:42 localhost systemd-sysv-generator[300136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:50:42 localhost systemd-rc-local-generator[300132]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:50:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:50:42 localhost systemd[1]: var-lib-containers-storage-overlay-57feff3eb59eee39e0ea04bd6b7896883c40f9970209190151354953d592b02e-merged.mount: Deactivated successfully. Oct 5 05:50:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:50:43 localhost systemd[1]: Reloading. Oct 5 05:50:43 localhost podman[300144]: 2025-10-05 09:50:43.131084367 +0000 UTC m=+0.082925531 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:50:43 localhost podman[300144]: 2025-10-05 09:50:43.176799447 +0000 UTC m=+0.128640671 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 05:50:43 localhost systemd-rc-local-generator[300198]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:50:43 localhost systemd-sysv-generator[300203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:50:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:50:43 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:50:43 localhost systemd[1]: Starting Ceph mds.mds.np0005471150.bsiqok for 659062ac-50b4-5607-b699-3105da7f55ee... 
Oct 5 05:50:43 localhost podman[300260]: Oct 5 05:50:43 localhost podman[300260]: 2025-10-05 09:50:43.830827559 +0000 UTC m=+0.081478322 container create d456beed25d76a5c13b7ae346f92eb96f00519ed441a63f07f2f02127c03dc0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mds-mds-np0005471150-bsiqok, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, architecture=x86_64, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, vcs-type=git, GIT_BRANCH=main, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:50:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716f8c68e34beafed8dd83a8e16fd584c5e74cdfc4e4292f56ce54399a64fe84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 05:50:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716f8c68e34beafed8dd83a8e16fd584c5e74cdfc4e4292f56ce54399a64fe84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:50:43 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/716f8c68e34beafed8dd83a8e16fd584c5e74cdfc4e4292f56ce54399a64fe84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:50:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716f8c68e34beafed8dd83a8e16fd584c5e74cdfc4e4292f56ce54399a64fe84/merged/var/lib/ceph/mds/ceph-mds.np0005471150.bsiqok supports timestamps until 2038 (0x7fffffff) Oct 5 05:50:43 localhost podman[300260]: 2025-10-05 09:50:43.89518899 +0000 UTC m=+0.145839753 container init d456beed25d76a5c13b7ae346f92eb96f00519ed441a63f07f2f02127c03dc0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mds-mds-np0005471150-bsiqok, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container) Oct 5 05:50:43 localhost podman[300260]: 2025-10-05 09:50:43.796935308 +0000 UTC m=+0.047586081 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:50:43 localhost podman[300260]: 2025-10-05 09:50:43.903556966 +0000 UTC 
m=+0.154207729 container start d456beed25d76a5c13b7ae346f92eb96f00519ed441a63f07f2f02127c03dc0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mds-mds-np0005471150-bsiqok, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, name=rhceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7) Oct 5 05:50:43 localhost bash[300260]: d456beed25d76a5c13b7ae346f92eb96f00519ed441a63f07f2f02127c03dc0b Oct 5 05:50:43 localhost systemd[1]: Started Ceph mds.mds.np0005471150.bsiqok for 659062ac-50b4-5607-b699-3105da7f55ee. 
Oct 5 05:50:43 localhost ceph-mds[300279]: set uid:gid to 167:167 (ceph:ceph) Oct 5 05:50:43 localhost ceph-mds[300279]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2 Oct 5 05:50:43 localhost ceph-mds[300279]: main not setting numa affinity Oct 5 05:50:43 localhost ceph-mds[300279]: pidfile_write: ignore empty --pid-file Oct 5 05:50:43 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mds-mds-np0005471150-bsiqok[300275]: starting mds.mds.np0005471150.bsiqok at Oct 5 05:50:43 localhost ceph-mds[300279]: mds.mds.np0005471150.bsiqok Updating MDS map to version 8 from mon.0 Oct 5 05:50:44 localhost ceph-mds[300279]: mds.mds.np0005471150.bsiqok Updating MDS map to version 9 from mon.0 Oct 5 05:50:44 localhost ceph-mds[300279]: mds.mds.np0005471150.bsiqok Monitors have assigned me to become a standby. Oct 5 05:50:45 localhost podman[300426]: 2025-10-05 09:50:45.558728307 +0000 UTC m=+0.084527315 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.description=Red Hat Ceph Storage 7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:50:45 localhost podman[300426]: 2025-10-05 09:50:45.683320408 +0000 UTC m=+0.209119416 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, distribution-scope=public, ceph=True, vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:50:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:50:46 localhost podman[300496]: 2025-10-05 09:50:46.122114001 +0000 UTC m=+0.101614394 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Oct 5 05:50:46 localhost podman[300496]: 2025-10-05 09:50:46.145346446 +0000 UTC m=+0.124846829 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:50:46 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:50:46 localhost nova_compute[297021]: 2025-10-05 09:50:46.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:47 localhost nova_compute[297021]: 2025-10-05 09:50:47.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:50:48 localhost podman[300567]: 2025-10-05 09:50:48.686883899 +0000 UTC m=+0.091880192 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git) Oct 5 05:50:48 localhost podman[300567]: 2025-10-05 09:50:48.70474894 +0000 UTC m=+0.109745233 container exec_died 
2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 05:50:48 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:50:51 localhost nova_compute[297021]: 2025-10-05 09:50:51.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:51 localhost podman[248506]: time="2025-10-05T09:50:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:50:51 localhost podman[248506]: @ - - [05/Oct/2025:09:50:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139591 "" "Go-http-client/1.1" Oct 5 05:50:51 localhost podman[248506]: @ - - [05/Oct/2025:09:50:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17841 "" "Go-http-client/1.1" Oct 5 05:50:52 localhost openstack_network_exporter[250601]: ERROR 09:50:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:50:52 localhost openstack_network_exporter[250601]: ERROR 09:50:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:50:52 localhost openstack_network_exporter[250601]: ERROR 09:50:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:50:52 localhost openstack_network_exporter[250601]: ERROR 09:50:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:50:52 localhost openstack_network_exporter[250601]: Oct 5 05:50:52 localhost openstack_network_exporter[250601]: ERROR 09:50:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:50:52 localhost openstack_network_exporter[250601]: Oct 5 05:50:52 localhost nova_compute[297021]: 2025-10-05 09:50:52.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:52 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:50:52 localhost podman[300590]: 2025-10-05 09:50:52.677514541 +0000 UTC m=+0.087233748 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:50:52 localhost podman[300590]: 2025-10-05 09:50:52.686196334 +0000 UTC m=+0.095915551 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:50:52 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:50:56 localhost nova_compute[297021]: 2025-10-05 09:50:56.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:50:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:50:56 localhost systemd[1]: tmp-crun.SN4zNU.mount: Deactivated successfully. Oct 5 05:50:56 localhost podman[300612]: 2025-10-05 09:50:56.686643718 +0000 UTC m=+0.089366805 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': 
{'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:50:56 localhost podman[300612]: 2025-10-05 09:50:56.69789911 +0000 UTC m=+0.100622187 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:50:56 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:50:57 localhost nova_compute[297021]: 2025-10-05 09:50:57.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:51:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:51:00 localhost podman[300638]: 2025-10-05 09:51:00.678589705 +0000 UTC m=+0.084329329 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:51:00 localhost podman[300638]: 2025-10-05 09:51:00.694184095 +0000 UTC m=+0.099923749 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:51:00 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:51:00 localhost systemd[1]: tmp-crun.8w6bhG.mount: Deactivated successfully. Oct 5 05:51:00 localhost podman[300637]: 2025-10-05 09:51:00.793341612 +0000 UTC m=+0.198489901 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:51:00 localhost podman[300637]: 2025-10-05 09:51:00.83045578 +0000 UTC m=+0.235604029 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid) Oct 5 05:51:00 localhost systemd[1]: 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:51:01 localhost nova_compute[297021]: 2025-10-05 09:51:01.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:02 localhost nova_compute[297021]: 2025-10-05 09:51:02.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:06 localhost nova_compute[297021]: 2025-10-05 09:51:06.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:06 localhost nova_compute[297021]: 2025-10-05 09:51:06.548 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:06 localhost nova_compute[297021]: 2025-10-05 09:51:06.549 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:06 localhost nova_compute[297021]: 2025-10-05 09:51:06.566 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:06 localhost nova_compute[297021]: 2025-10-05 09:51:06.566 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:51:06 
localhost nova_compute[297021]: 2025-10-05 09:51:06.567 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:51:06 localhost nova_compute[297021]: 2025-10-05 09:51:06.768 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:51:06 localhost nova_compute[297021]: 2025-10-05 09:51:06.769 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:51:06 localhost nova_compute[297021]: 2025-10-05 09:51:06.769 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:51:06 localhost nova_compute[297021]: 2025-10-05 09:51:06.770 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.133 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": 
"br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.159 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.160 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.161 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.161 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.162 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.162 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.162 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.163 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.163 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.164 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.190 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.190 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.191 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.191 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.192 2 DEBUG 
oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.639 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.704 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.704 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.915 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.916 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12224MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.917 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.917 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.982 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.983 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:51:07 localhost nova_compute[297021]: 2025-10-05 09:51:07.983 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:51:08 localhost nova_compute[297021]: 2025-10-05 09:51:08.018 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:51:08 localhost nova_compute[297021]: 2025-10-05 09:51:08.473 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:51:08 localhost nova_compute[297021]: 2025-10-05 09:51:08.481 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:51:08 localhost nova_compute[297021]: 2025-10-05 09:51:08.502 2 DEBUG 
nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:51:08 localhost nova_compute[297021]: 2025-10-05 09:51:08.505 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:51:08 localhost nova_compute[297021]: 2025-10-05 09:51:08.505 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:51:09 localhost systemd[1]: tmp-crun.PSwoKN.mount: Deactivated successfully. 
Oct 5 05:51:09 localhost podman[300719]: 2025-10-05 09:51:09.682706952 +0000 UTC m=+0.094220445 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 5 05:51:09 localhost podman[300719]: 2025-10-05 09:51:09.691878979 +0000 UTC 
m=+0.103392462 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:51:09 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:51:11 localhost nova_compute[297021]: 2025-10-05 09:51:11.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:12 localhost nova_compute[297021]: 2025-10-05 09:51:12.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:51:13 localhost systemd[1]: tmp-crun.PhHW4U.mount: Deactivated successfully. Oct 5 05:51:13 localhost podman[300737]: 2025-10-05 09:51:13.687797993 +0000 UTC m=+0.099005804 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:51:13 localhost podman[300737]: 2025-10-05 09:51:13.768074413 +0000 UTC m=+0.179282264 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 05:51:13 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:51:16 localhost nova_compute[297021]: 2025-10-05 09:51:16.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:51:16 localhost podman[300762]: 2025-10-05 09:51:16.699301408 +0000 UTC m=+0.092177190 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Oct 5 05:51:16 localhost podman[300762]: 2025-10-05 09:51:16.711921837 +0000 UTC m=+0.104797619 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:51:16 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:51:17 localhost nova_compute[297021]: 2025-10-05 09:51:17.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:19 localhost systemd[1]: session-66.scope: Deactivated successfully. Oct 5 05:51:19 localhost systemd[1]: session-66.scope: Consumed 2.044s CPU time. Oct 5 05:51:19 localhost systemd-logind[760]: Session 66 logged out. Waiting for processes to exit. Oct 5 05:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:51:19 localhost systemd-logind[760]: Removed session 66. Oct 5 05:51:19 localhost systemd[1]: tmp-crun.c8gB4f.mount: Deactivated successfully. Oct 5 05:51:19 localhost podman[300802]: 2025-10-05 09:51:19.378956826 +0000 UTC m=+0.096355372 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 05:51:19 localhost podman[300802]: 2025-10-05 09:51:19.419152288 +0000 UTC m=+0.136550884 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 05:51:19 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:51:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:51:20.451 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:51:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:51:20.451 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:51:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:51:20.452 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:51:21 localhost nova_compute[297021]: 2025-10-05 09:51:21.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:21 localhost podman[248506]: time="2025-10-05T09:51:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:51:21 localhost podman[248506]: @ - - [05/Oct/2025:09:51:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139591 "" "Go-http-client/1.1" Oct 5 05:51:21 localhost podman[248506]: @ - - [05/Oct/2025:09:51:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17838 "" "Go-http-client/1.1" Oct 5 05:51:22 localhost openstack_network_exporter[250601]: ERROR 09:51:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:51:22 localhost 
openstack_network_exporter[250601]: ERROR 09:51:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:51:22 localhost openstack_network_exporter[250601]: ERROR 09:51:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:51:22 localhost openstack_network_exporter[250601]: Oct 5 05:51:22 localhost openstack_network_exporter[250601]: ERROR 09:51:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:51:22 localhost openstack_network_exporter[250601]: ERROR 09:51:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:51:22 localhost openstack_network_exporter[250601]: Oct 5 05:51:22 localhost nova_compute[297021]: 2025-10-05 09:51:22.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:51:23 localhost podman[300822]: 2025-10-05 09:51:23.662179169 +0000 UTC m=+0.072358037 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:51:23 localhost podman[300822]: 2025-10-05 09:51:23.674998654 +0000 UTC m=+0.085177532 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:51:23 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:51:26 localhost nova_compute[297021]: 2025-10-05 09:51:26.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:27 localhost nova_compute[297021]: 2025-10-05 09:51:27.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:51:27 localhost systemd[1]: tmp-crun.NAr7I5.mount: Deactivated successfully. Oct 5 05:51:27 localhost podman[300845]: 2025-10-05 09:51:27.6880543 +0000 UTC m=+0.097745981 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:51:27 localhost podman[300845]: 2025-10-05 09:51:27.723374289 +0000 UTC m=+0.133065930 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:51:27 localhost 
systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:51:29 localhost systemd[1]: Stopping User Manager for UID 1003... Oct 5 05:51:29 localhost systemd[299237]: Activating special unit Exit the Session... Oct 5 05:51:29 localhost systemd[299237]: Stopped target Main User Target. Oct 5 05:51:29 localhost systemd[299237]: Stopped target Basic System. Oct 5 05:51:29 localhost systemd[299237]: Stopped target Paths. Oct 5 05:51:29 localhost systemd[299237]: Stopped target Sockets. Oct 5 05:51:29 localhost systemd[299237]: Stopped target Timers. Oct 5 05:51:29 localhost systemd[299237]: Stopped Mark boot as successful after the user session has run 2 minutes. Oct 5 05:51:29 localhost systemd[299237]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 05:51:29 localhost systemd[299237]: Closed D-Bus User Message Bus Socket. Oct 5 05:51:29 localhost systemd[299237]: Stopped Create User's Volatile Files and Directories. Oct 5 05:51:29 localhost systemd[299237]: Removed slice User Application Slice. Oct 5 05:51:29 localhost systemd[299237]: Reached target Shutdown. Oct 5 05:51:29 localhost systemd[299237]: Finished Exit the Session. Oct 5 05:51:29 localhost systemd[299237]: Reached target Exit the Session. Oct 5 05:51:29 localhost systemd[1]: user@1003.service: Deactivated successfully. Oct 5 05:51:29 localhost systemd[1]: Stopped User Manager for UID 1003. Oct 5 05:51:29 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Oct 5 05:51:29 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Oct 5 05:51:29 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Oct 5 05:51:29 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Oct 5 05:51:29 localhost systemd[1]: Removed slice User Slice of UID 1003. Oct 5 05:51:29 localhost systemd[1]: user-1003.slice: Consumed 2.454s CPU time. 
Oct 5 05:51:31 localhost nova_compute[297021]: 2025-10-05 09:51:31.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:51:31 localhost podman[300870]: 2025-10-05 09:51:31.686550903 +0000 UTC m=+0.088696107 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:51:31 localhost podman[300870]: 2025-10-05 09:51:31.727851414 +0000 UTC m=+0.129996648 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 
5 05:51:31 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:51:31 localhost podman[300871]: 2025-10-05 09:51:31.741762269 +0000 UTC m=+0.139855683 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 5 05:51:31 localhost 
podman[300871]: 2025-10-05 09:51:31.758075197 +0000 UTC m=+0.156168601 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:51:31 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:51:32 localhost nova_compute[297021]: 2025-10-05 09:51:32.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:36 localhost nova_compute[297021]: 2025-10-05 09:51:36.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:37 localhost nova_compute[297021]: 2025-10-05 09:51:37.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:51:40 localhost systemd[1]: tmp-crun.eUYmrI.mount: Deactivated successfully. Oct 5 05:51:40 localhost podman[301031]: 2025-10-05 09:51:40.18316476 +0000 UTC m=+0.104515862 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:51:40 localhost podman[301031]: 2025-10-05 09:51:40.193106277 +0000 UTC m=+0.114457409 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 05:51:40 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:51:41 localhost nova_compute[297021]: 2025-10-05 09:51:41.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:42 localhost nova_compute[297021]: 2025-10-05 09:51:42.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:51:44 localhost systemd[1]: tmp-crun.jab55b.mount: Deactivated successfully. 
Oct 5 05:51:44 localhost podman[301048]: 2025-10-05 09:51:44.698548387 +0000 UTC m=+0.094271868 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 05:51:44 localhost podman[301048]: 2025-10-05 09:51:44.765734744 +0000 UTC m=+0.161458245 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:51:44 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:51:46 localhost nova_compute[297021]: 2025-10-05 09:51:46.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:51:47 localhost systemd[1]: tmp-crun.dHgK6X.mount: Deactivated successfully. 
Oct 5 05:51:47 localhost podman[301073]: 2025-10-05 09:51:47.011706246 +0000 UTC m=+0.099226049 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:51:47 localhost podman[301073]: 2025-10-05 09:51:47.022875537 +0000 UTC m=+0.110395340 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:51:47 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:51:47 localhost nova_compute[297021]: 2025-10-05 09:51:47.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:51:49 localhost podman[301091]: 2025-10-05 09:51:49.682755953 +0000 UTC m=+0.089824907 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, config_id=edpm, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 05:51:49 localhost podman[301091]: 2025-10-05 09:51:49.695724022 +0000 UTC m=+0.102793016 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41) Oct 5 05:51:49 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:51:51 localhost podman[248506]: time="2025-10-05T09:51:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:51:51 localhost podman[248506]: @ - - [05/Oct/2025:09:51:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139591 "" "Go-http-client/1.1" Oct 5 05:51:51 localhost nova_compute[297021]: 2025-10-05 09:51:51.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:51 localhost podman[248506]: @ - - [05/Oct/2025:09:51:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17844 "" "Go-http-client/1.1" Oct 5 05:51:52 localhost openstack_network_exporter[250601]: ERROR 09:51:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:51:52 localhost openstack_network_exporter[250601]: ERROR 09:51:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:51:52 localhost openstack_network_exporter[250601]: ERROR 09:51:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:51:52 localhost openstack_network_exporter[250601]: ERROR 09:51:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:51:52 localhost openstack_network_exporter[250601]: Oct 5 05:51:52 localhost openstack_network_exporter[250601]: ERROR 09:51:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:51:52 localhost openstack_network_exporter[250601]: Oct 5 05:51:52 localhost nova_compute[297021]: 2025-10-05 09:51:52.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:54 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:51:54 localhost podman[301111]: 2025-10-05 09:51:54.676345554 +0000 UTC m=+0.083716352 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:51:54 localhost podman[301111]: 2025-10-05 09:51:54.690962418 +0000 UTC m=+0.098333206 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:51:54 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:51:56 localhost nova_compute[297021]: 2025-10-05 09:51:56.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:57 localhost nova_compute[297021]: 2025-10-05 09:51:57.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:51:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:51:58 localhost podman[301134]: 2025-10-05 09:51:58.683569392 +0000 UTC m=+0.092815187 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', 
'--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:51:58 localhost podman[301134]: 2025-10-05 09:51:58.691468735 +0000 UTC m=+0.100714520 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:51:58 localhost 
systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:51:59 localhost nova_compute[297021]: 2025-10-05 09:51:59.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:59 localhost nova_compute[297021]: 2025-10-05 09:51:59.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 05:51:59 localhost nova_compute[297021]: 2025-10-05 09:51:59.442 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 05:51:59 localhost nova_compute[297021]: 2025-10-05 09:51:59.442 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:59 localhost nova_compute[297021]: 2025-10-05 09:51:59.443 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 05:51:59 localhost nova_compute[297021]: 2025-10-05 09:51:59.456 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:01 localhost 
nova_compute[297021]: 2025-10-05 09:52:01.470 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:01 localhost nova_compute[297021]: 2025-10-05 09:52:01.471 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:01 localhost nova_compute[297021]: 2025-10-05 09:52:01.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.447 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.447 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.448 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.448 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.448 2 DEBUG 
oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:52:02 localhost systemd[1]: tmp-crun.ZcA13s.mount: Deactivated successfully. Oct 5 05:52:02 localhost podman[301163]: 2025-10-05 09:52:02.694057451 +0000 UTC m=+0.098961785 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 05:52:02 localhost podman[301163]: 2025-10-05 09:52:02.742033463 +0000 UTC m=+0.146937767 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:52:02 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:52:02 localhost podman[301161]: 2025-10-05 09:52:02.741760935 +0000 UTC m=+0.145846347 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:52:02 localhost podman[301161]: 2025-10-05 09:52:02.82601429 +0000 UTC m=+0.230099752 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:52:02 localhost systemd[1]: 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:52:02 localhost nova_compute[297021]: 2025-10-05 09:52:02.931 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:02.999 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.000 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.235 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.238 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=12234MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.238 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.239 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.481 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.482 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.482 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.663 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing inventories for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.681 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating ProviderTree inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 05:52:03 localhost 
nova_compute[297021]: 2025-10-05 09:52:03.681 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.825 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing aggregate associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.862 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing trait associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, traits: 
HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 05:52:03 localhost nova_compute[297021]: 2025-10-05 09:52:03.900 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:52:04 localhost nova_compute[297021]: 2025-10-05 09:52:04.379 2 DEBUG oslo_concurrency.processutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:52:04 localhost nova_compute[297021]: 2025-10-05 09:52:04.387 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:52:04 localhost nova_compute[297021]: 2025-10-05 09:52:04.407 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:52:04 localhost nova_compute[297021]: 2025-10-05 09:52:04.410 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:52:04 localhost nova_compute[297021]: 2025-10-05 09:52:04.410 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.172s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:52:05 localhost nova_compute[297021]: 2025-10-05 09:52:05.407 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:05 localhost nova_compute[297021]: 2025-10-05 09:52:05.408 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:05 localhost nova_compute[297021]: 2025-10-05 09:52:05.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:06 localhost nova_compute[297021]: 2025-10-05 09:52:06.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:06 localhost nova_compute[297021]: 2025-10-05 09:52:06.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:52:06 localhost nova_compute[297021]: 2025-10-05 09:52:06.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:52:06 localhost nova_compute[297021]: 
2025-10-05 09:52:06.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:06 localhost nova_compute[297021]: 2025-10-05 09:52:06.772 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:52:06 localhost nova_compute[297021]: 2025-10-05 09:52:06.773 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:52:06 localhost nova_compute[297021]: 2025-10-05 09:52:06.774 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:52:06 localhost nova_compute[297021]: 2025-10-05 09:52:06.775 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:52:07 localhost nova_compute[297021]: 2025-10-05 09:52:07.118 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": 
"gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:52:07 localhost nova_compute[297021]: 2025-10-05 09:52:07.133 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:52:07 localhost nova_compute[297021]: 2025-10-05 09:52:07.134 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:52:07 localhost nova_compute[297021]: 2025-10-05 09:52:07.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:07 localhost podman[301374]: Oct 5 05:52:07 localhost podman[301374]: 2025-10-05 09:52:07.980340547 +0000 UTC m=+0.068724596 container create 
d03c807076b3d2133c08542f719f081a5dc0921dccc93c16e58efe82f2ef8bee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_goldwasser, release=553, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, RELEASE=main, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:52:08 localhost systemd[1]: Started libpod-conmon-d03c807076b3d2133c08542f719f081a5dc0921dccc93c16e58efe82f2ef8bee.scope. Oct 5 05:52:08 localhost systemd[1]: Started libcrun container. 
Oct 5 05:52:08 localhost podman[301374]: 2025-10-05 09:52:07.955455112 +0000 UTC m=+0.043839171 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:52:08 localhost podman[301374]: 2025-10-05 09:52:08.055963188 +0000 UTC m=+0.144347247 container init d03c807076b3d2133c08542f719f081a5dc0921dccc93c16e58efe82f2ef8bee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_goldwasser, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, version=7, distribution-scope=public, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git) Oct 5 05:52:08 localhost systemd[1]: tmp-crun.4tMjZm.mount: Deactivated successfully. 
Oct 5 05:52:08 localhost podman[301374]: 2025-10-05 09:52:08.066680067 +0000 UTC m=+0.155064116 container start d03c807076b3d2133c08542f719f081a5dc0921dccc93c16e58efe82f2ef8bee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_goldwasser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64) Oct 5 05:52:08 localhost podman[301374]: 2025-10-05 09:52:08.067011346 +0000 UTC m=+0.155395395 container attach d03c807076b3d2133c08542f719f081a5dc0921dccc93c16e58efe82f2ef8bee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_goldwasser, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, RELEASE=main, 
vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.openshift.tags=rhceph ceph) Oct 5 05:52:08 localhost serene_goldwasser[301390]: 167 167 Oct 5 05:52:08 localhost systemd[1]: libpod-d03c807076b3d2133c08542f719f081a5dc0921dccc93c16e58efe82f2ef8bee.scope: Deactivated successfully. Oct 5 05:52:08 localhost podman[301374]: 2025-10-05 09:52:08.072225018 +0000 UTC m=+0.160609137 container died d03c807076b3d2133c08542f719f081a5dc0921dccc93c16e58efe82f2ef8bee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_goldwasser, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=553, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, 
GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:52:08 localhost podman[301395]: 2025-10-05 09:52:08.183096115 +0000 UTC m=+0.089463847 container remove d03c807076b3d2133c08542f719f081a5dc0921dccc93c16e58efe82f2ef8bee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_goldwasser, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph) Oct 5 05:52:08 localhost systemd[1]: libpod-conmon-d03c807076b3d2133c08542f719f081a5dc0921dccc93c16e58efe82f2ef8bee.scope: Deactivated successfully. Oct 5 05:52:08 localhost systemd[1]: Reloading. Oct 5 05:52:08 localhost systemd-rc-local-generator[301434]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:52:08 localhost systemd-sysv-generator[301439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:52:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:52:08 localhost systemd[1]: var-lib-containers-storage-overlay-5ea0a7662b266027ed0963cf484af2f834a6e0f366a221d268fe039dfa565de0-merged.mount: Deactivated successfully. Oct 5 05:52:08 localhost systemd[1]: Reloading. Oct 5 05:52:08 localhost systemd-rc-local-generator[301477]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:52:08 localhost systemd-sysv-generator[301483]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:52:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:52:09 localhost systemd[1]: Starting Ceph mgr.np0005471150.zwqxye for 659062ac-50b4-5607-b699-3105da7f55ee... 
Oct 5 05:52:09 localhost podman[301542]: Oct 5 05:52:09 localhost podman[301542]: 2025-10-05 09:52:09.507554805 +0000 UTC m=+0.091462852 container create 90e7d6eb92abfea057cbe66af7ffa8ce56971babec6ee3c3b660e9364c03257c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye, vcs-type=git, io.buildah.version=1.33.12, ceph=True, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553, com.redhat.component=rhceph-container, architecture=x86_64) Oct 5 05:52:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b13a649a9c10db038b0386eff6891f897cbd57e3ada4784a93ba682113f07a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b13a649a9c10db038b0386eff6891f897cbd57e3ada4784a93ba682113f07a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:09 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/1b13a649a9c10db038b0386eff6891f897cbd57e3ada4784a93ba682113f07a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b13a649a9c10db038b0386eff6891f897cbd57e3ada4784a93ba682113f07a1/merged/var/lib/ceph/mgr/ceph-np0005471150.zwqxye supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:09 localhost podman[301542]: 2025-10-05 09:52:09.469441041 +0000 UTC m=+0.053349148 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:52:09 localhost podman[301542]: 2025-10-05 09:52:09.575434046 +0000 UTC m=+0.159342103 container init 90e7d6eb92abfea057cbe66af7ffa8ce56971babec6ee3c3b660e9364c03257c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, version=7, io.openshift.expose-services=, name=rhceph, distribution-scope=public, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., ceph=True, architecture=x86_64) Oct 5 05:52:09 localhost podman[301542]: 2025-10-05 09:52:09.586026643 +0000 UTC 
m=+0.169934700 container start 90e7d6eb92abfea057cbe66af7ffa8ce56971babec6ee3c3b660e9364c03257c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, release=553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7) Oct 5 05:52:09 localhost bash[301542]: 90e7d6eb92abfea057cbe66af7ffa8ce56971babec6ee3c3b660e9364c03257c Oct 5 05:52:09 localhost systemd[1]: Started Ceph mgr.np0005471150.zwqxye for 659062ac-50b4-5607-b699-3105da7f55ee. 
Oct 5 05:52:09 localhost ceph-mgr[301561]: set uid:gid to 167:167 (ceph:ceph) Oct 5 05:52:09 localhost ceph-mgr[301561]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Oct 5 05:52:09 localhost ceph-mgr[301561]: pidfile_write: ignore empty --pid-file Oct 5 05:52:09 localhost ceph-mgr[301561]: mgr[py] Loading python module 'alerts' Oct 5 05:52:09 localhost ceph-mgr[301561]: mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 5 05:52:09 localhost ceph-mgr[301561]: mgr[py] Loading python module 'balancer' Oct 5 05:52:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:09.759+0000 7fbc31165140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 5 05:52:09 localhost ceph-mgr[301561]: mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 5 05:52:09 localhost ceph-mgr[301561]: mgr[py] Loading python module 'cephadm' Oct 5 05:52:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:09.825+0000 7fbc31165140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 5 05:52:10 localhost ceph-mgr[301561]: mgr[py] Loading python module 'crash' Oct 5 05:52:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:52:10 localhost ceph-mgr[301561]: mgr[py] Module crash has missing NOTIFY_TYPES member Oct 5 05:52:10 localhost ceph-mgr[301561]: mgr[py] Loading python module 'dashboard' Oct 5 05:52:10 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:10.601+0000 7fbc31165140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Oct 5 05:52:10 localhost systemd[1]: tmp-crun.HmAehr.mount: Deactivated successfully. 
Oct 5 05:52:10 localhost podman[301591]: 2025-10-05 09:52:10.696441568 +0000 UTC m=+0.100404964 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:52:10 localhost podman[301591]: 2025-10-05 09:52:10.726923875 +0000 UTC 
m=+0.130887251 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:52:10 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Loading python module 'devicehealth' Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Loading python module 'diskprediction_local' Oct 5 05:52:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:11.224+0000 7fbc31165140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 5 05:52:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Oct 5 05:52:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Oct 5 05:52:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: from numpy import show_config as show_numpy_config Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Loading python module 'influx' Oct 5 05:52:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:11.385+0000 7fbc31165140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Module influx has missing NOTIFY_TYPES member Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Loading python module 'insights' Oct 5 05:52:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:11.448+0000 7fbc31165140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Oct 5 05:52:11 localhost nova_compute[297021]: 2025-10-05 09:52:11.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Loading python module 'iostat' Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Loading python module 'k8sevents' Oct 5 05:52:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:11.565+0000 7fbc31165140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Loading python module 'localpool' Oct 5 05:52:11 localhost ceph-mgr[301561]: mgr[py] Loading python module 'mds_autoscaler' Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Loading python module 'mirroring' Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Loading python module 'nfs' Oct 5 05:52:12 localhost nova_compute[297021]: 
2025-10-05 09:52:12.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Loading python module 'orchestrator' Oct 5 05:52:12 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:12.404+0000 7fbc31165140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:12.551+0000 7fbc31165140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Loading python module 'osd_perf_query' Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Loading python module 'osd_support' Oct 5 05:52:12 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:12.615+0000 7fbc31165140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Loading python module 'pg_autoscaler' Oct 5 05:52:12 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:12.670+0000 7fbc31165140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Loading python module 'progress' Oct 5 05:52:12 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:12.736+0000 7fbc31165140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Module progress has missing NOTIFY_TYPES member Oct 5 05:52:12 localhost ceph-mgr[301561]: mgr[py] Loading python module 'prometheus' Oct 5 05:52:12 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:12.797+0000 7fbc31165140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Loading python module 'rbd_support' Oct 5 05:52:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:13.092+0000 7fbc31165140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Loading python module 'restful' Oct 5 05:52:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:13.172+0000 7fbc31165140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Loading python module 'rgw' Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Loading python module 'rook' Oct 5 05:52:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:13.492+0000 7fbc31165140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:13.908+0000 7fbc31165140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Oct 5 
05:52:13 localhost ceph-mgr[301561]: mgr[py] Module rook has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Loading python module 'selftest' Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Module selftest has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:13.969+0000 7fbc31165140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Oct 5 05:52:13 localhost ceph-mgr[301561]: mgr[py] Loading python module 'snap_schedule' Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Loading python module 'stats' Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Loading python module 'status' Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Module status has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Loading python module 'telegraf' Oct 5 05:52:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:14.160+0000 7fbc31165140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Loading python module 'telemetry' Oct 5 05:52:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:14.218+0000 7fbc31165140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Loading python module 'test_orchestrator' Oct 5 05:52:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:14.347+0000 7fbc31165140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Oct 5 
05:52:14 localhost ceph-mgr[301561]: mgr[py] Loading python module 'volumes' Oct 5 05:52:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:14.490+0000 7fbc31165140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:14.675+0000 7fbc31165140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Module volumes has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Loading python module 'zabbix' Oct 5 05:52:14 localhost ceph-mgr[301561]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:52:14.732+0000 7fbc31165140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x56322b0171e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Oct 5 05:52:14 localhost ceph-mgr[301561]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3986210712 Oct 5 05:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:52:14 localhost podman[301664]: 2025-10-05 09:52:14.963722168 +0000 UTC m=+0.092813628 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible) Oct 5 05:52:15 localhost podman[301664]: 2025-10-05 09:52:15.005888381 +0000 UTC m=+0.134979861 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller) Oct 5 05:52:15 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:52:15 localhost podman[301763]: 2025-10-05 09:52:15.73627475 +0000 UTC m=+0.091315687 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:52:15 localhost podman[301763]: 2025-10-05 09:52:15.829791597 +0000 UTC m=+0.184832534 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, 
version=7, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, release=553, description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux ) Oct 5 05:52:16 localhost nova_compute[297021]: 2025-10-05 09:52:16.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:52:17 localhost nova_compute[297021]: 2025-10-05 09:52:17.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:17 localhost podman[301901]: 2025-10-05 09:52:17.386137935 +0000 UTC m=+0.096636782 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 
'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 5 05:52:17 localhost podman[301901]: 2025-10-05 09:52:17.397863112 +0000 UTC m=+0.108361969 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:52:17 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:52:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:52:19 localhost podman[301974]: 2025-10-05 09:52:19.96775267 +0000 UTC m=+0.099225953 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=) Oct 5 05:52:19 localhost podman[301974]: 2025-10-05 09:52:19.986707693 +0000 UTC m=+0.118180906 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, version=9.6) Oct 5 05:52:20 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:52:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:52:20.451 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:52:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:52:20.452 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:52:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:52:20.453 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:52:21 localhost podman[248506]: time="2025-10-05T09:52:21Z" level=info msg="List containers: received 
`last` parameter - overwriting `limit`" Oct 5 05:52:21 localhost podman[248506]: @ - - [05/Oct/2025:09:52:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 141657 "" "Go-http-client/1.1" Oct 5 05:52:21 localhost nova_compute[297021]: 2025-10-05 09:52:21.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:21 localhost podman[248506]: @ - - [05/Oct/2025:09:52:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18331 "" "Go-http-client/1.1" Oct 5 05:52:22 localhost openstack_network_exporter[250601]: ERROR 09:52:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:52:22 localhost openstack_network_exporter[250601]: ERROR 09:52:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:52:22 localhost openstack_network_exporter[250601]: ERROR 09:52:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:52:22 localhost openstack_network_exporter[250601]: ERROR 09:52:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:52:22 localhost openstack_network_exporter[250601]: Oct 5 05:52:22 localhost openstack_network_exporter[250601]: ERROR 09:52:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:52:22 localhost openstack_network_exporter[250601]: Oct 5 05:52:22 localhost nova_compute[297021]: 2025-10-05 09:52:22.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:52:25 localhost podman[302634]: 2025-10-05 09:52:25.671388055 +0000 UTC m=+0.083738442 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:52:25 localhost podman[302634]: 2025-10-05 09:52:25.683639067 +0000 UTC m=+0.095989444 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:52:25 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:52:26 localhost nova_compute[297021]: 2025-10-05 09:52:26.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:27 localhost nova_compute[297021]: 2025-10-05 09:52:27.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:52:29 localhost podman[302675]: 2025-10-05 09:52:29.660763976 +0000 UTC m=+0.073275138 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:52:29 localhost podman[302675]: 2025-10-05 09:52:29.66792188 +0000 UTC m=+0.080433092 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:52:29 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: 
Deactivated successfully. Oct 5 05:52:31 localhost nova_compute[297021]: 2025-10-05 09:52:31.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:31 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x56322b0171e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Oct 5 05:52:32 localhost nova_compute[297021]: 2025-10-05 09:52:32.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:52:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:52:33 localhost systemd[1]: tmp-crun.T0soDX.mount: Deactivated successfully. Oct 5 05:52:33 localhost podman[302697]: 2025-10-05 09:52:33.696899867 +0000 UTC m=+0.100347561 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:52:33 localhost podman[302697]: 2025-10-05 09:52:33.730322834 +0000 UTC m=+0.133770488 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid) Oct 5 05:52:33 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:52:33 localhost podman[302698]: 2025-10-05 09:52:33.791600306 +0000 UTC m=+0.190589240 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3) Oct 5 05:52:33 localhost podman[302698]: 2025-10-05 09:52:33.808871204 +0000 UTC m=+0.207860188 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Oct 5 05:52:33 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:52:35 localhost ceph-mds[300279]: mds.beacon.mds.np0005471150.bsiqok missed beacon ack from the monitors Oct 5 05:52:36 localhost nova_compute[297021]: 2025-10-05 09:52:36.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:37 localhost nova_compute[297021]: 2025-10-05 09:52:37.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:37 localhost podman[302812]: Oct 5 05:52:37 localhost podman[302812]: 2025-10-05 09:52:37.55681269 +0000 UTC m=+0.084294397 container create f85c1638ae7c702bf85a7c68cd54cdedafdb253c00164400ce49e18dfb6207be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_shtern, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, distribution-scope=public, name=rhceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, 
com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main) Oct 5 05:52:37 localhost systemd[1]: Started libpod-conmon-f85c1638ae7c702bf85a7c68cd54cdedafdb253c00164400ce49e18dfb6207be.scope. Oct 5 05:52:37 localhost podman[302812]: 2025-10-05 09:52:37.522536051 +0000 UTC m=+0.050017778 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:52:37 localhost systemd[1]: Started libcrun container. Oct 5 05:52:37 localhost podman[302812]: 2025-10-05 09:52:37.651918759 +0000 UTC m=+0.179400466 container init f85c1638ae7c702bf85a7c68cd54cdedafdb253c00164400ce49e18dfb6207be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_shtern, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, release=553, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container) Oct 5 05:52:37 localhost podman[302812]: 2025-10-05 09:52:37.66300057 +0000 UTC m=+0.190482277 container 
start f85c1638ae7c702bf85a7c68cd54cdedafdb253c00164400ce49e18dfb6207be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_shtern, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , architecture=x86_64, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:52:37 localhost podman[302812]: 2025-10-05 09:52:37.663309628 +0000 UTC m=+0.190791375 container attach f85c1638ae7c702bf85a7c68cd54cdedafdb253c00164400ce49e18dfb6207be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_shtern, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main) Oct 5 05:52:37 localhost recursing_shtern[302827]: 167 167 Oct 5 05:52:37 localhost systemd[1]: libpod-f85c1638ae7c702bf85a7c68cd54cdedafdb253c00164400ce49e18dfb6207be.scope: Deactivated successfully. Oct 5 05:52:37 localhost podman[302812]: 2025-10-05 09:52:37.666453534 +0000 UTC m=+0.193935291 container died f85c1638ae7c702bf85a7c68cd54cdedafdb253c00164400ce49e18dfb6207be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_shtern, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-09-24T08:57:55) Oct 5 05:52:37 localhost podman[302832]: 2025-10-05 
09:52:37.784816504 +0000 UTC m=+0.103549079 container remove f85c1638ae7c702bf85a7c68cd54cdedafdb253c00164400ce49e18dfb6207be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_shtern, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=553, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-09-24T08:57:55, version=7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:52:37 localhost systemd[1]: libpod-conmon-f85c1638ae7c702bf85a7c68cd54cdedafdb253c00164400ce49e18dfb6207be.scope: Deactivated successfully. 
Oct 5 05:52:37 localhost podman[302849]: Oct 5 05:52:37 localhost podman[302849]: 2025-10-05 09:52:37.909250649 +0000 UTC m=+0.084936195 container create 4d5925e3e77923ce077f3ba6695bd80892e043815262cd09bbf68c03a47285b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_snyder, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., RELEASE=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container) Oct 5 05:52:37 localhost systemd[1]: Started libpod-conmon-4d5925e3e77923ce077f3ba6695bd80892e043815262cd09bbf68c03a47285b5.scope. Oct 5 05:52:37 localhost systemd[1]: Started libcrun container. 
Oct 5 05:52:37 localhost podman[302849]: 2025-10-05 09:52:37.876462659 +0000 UTC m=+0.052148215 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06e3ed91c12b05bc159aec85130a2034cdd45e48d0d9ed4b6068ccf6fb1daa0/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06e3ed91c12b05bc159aec85130a2034cdd45e48d0d9ed4b6068ccf6fb1daa0/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06e3ed91c12b05bc159aec85130a2034cdd45e48d0d9ed4b6068ccf6fb1daa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06e3ed91c12b05bc159aec85130a2034cdd45e48d0d9ed4b6068ccf6fb1daa0/merged/var/lib/ceph/mon/ceph-np0005471150 supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:37 localhost podman[302849]: 2025-10-05 09:52:37.99263892 +0000 UTC m=+0.168324466 container init 4d5925e3e77923ce077f3ba6695bd80892e043815262cd09bbf68c03a47285b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_snyder, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, 
GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, build-date=2025-09-24T08:57:55, name=rhceph, io.buildah.version=1.33.12, vcs-type=git) Oct 5 05:52:38 localhost podman[302849]: 2025-10-05 09:52:38.00224205 +0000 UTC m=+0.177927606 container start 4d5925e3e77923ce077f3ba6695bd80892e043815262cd09bbf68c03a47285b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_snyder, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main) Oct 5 05:52:38 localhost podman[302849]: 2025-10-05 09:52:38.002581119 +0000 UTC m=+0.178266665 container attach 4d5925e3e77923ce077f3ba6695bd80892e043815262cd09bbf68c03a47285b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_snyder, GIT_REPO=https://github.com/ceph/ceph-container.git, 
build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, distribution-scope=public, release=553, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:52:38 localhost systemd[1]: libpod-4d5925e3e77923ce077f3ba6695bd80892e043815262cd09bbf68c03a47285b5.scope: Deactivated successfully. 
Oct 5 05:52:38 localhost podman[302849]: 2025-10-05 09:52:38.10287644 +0000 UTC m=+0.278561996 container died 4d5925e3e77923ce077f3ba6695bd80892e043815262cd09bbf68c03a47285b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_snyder, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_CLEAN=True, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, version=7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container) Oct 5 05:52:38 localhost podman[302890]: 2025-10-05 09:52:38.207892168 +0000 UTC m=+0.092747216 container remove 4d5925e3e77923ce077f3ba6695bd80892e043815262cd09bbf68c03a47285b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_snyder, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , 
GIT_BRANCH=main, version=7, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:52:38 localhost systemd[1]: libpod-conmon-4d5925e3e77923ce077f3ba6695bd80892e043815262cd09bbf68c03a47285b5.scope: Deactivated successfully. Oct 5 05:52:38 localhost systemd[1]: Reloading. Oct 5 05:52:38 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x56322b016f20 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Oct 5 05:52:38 localhost systemd-rc-local-generator[302931]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:52:38 localhost systemd-sysv-generator[302935]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:52:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:52:38 localhost systemd[1]: var-lib-containers-storage-overlay-82784e23515133d5531c40db1318c0ac6604211e0976332331365f9dfa13c78b-merged.mount: Deactivated successfully. Oct 5 05:52:38 localhost systemd[1]: Reloading. Oct 5 05:52:38 localhost systemd-sysv-generator[302971]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:52:38 localhost systemd-rc-local-generator[302967]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.835 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.837 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 05:52:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.858 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.859 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0f651621-cb74-4d80-8992-7f70ff28c797', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:52:38.837455', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 
'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07251b08-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': 'd3bb553734015fb13d5d48fabc6af5920d1e266370edcba16b558f01be66a035'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:52:38.837455', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07252fd0-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': '87ed95e1cdef8c299a256487f7a8bd5f1a7568da63c65a657a16fdf2999b8e1d'}]}, 'timestamp': '2025-10-05 09:52:38.860205', '_unique_id': '0b61b5f374f9470082470e7e1d4822a0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.862 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.863 12 INFO ceilometer.polling.manager [-] Polling 
pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.867 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7786f4f3-63ae-4a4e-ab33-e7a57944988f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.864111', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 
'072665bc-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': '5909706a5d550bdfc29912ee34e01a981219e66a5721ea558d950d1b7fb068d4'}]}, 'timestamp': '2025-10-05 09:52:38.868152', '_unique_id': 'd28014eec67d4713885806ecb335b7fc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused 
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.869 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.871 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.871 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ed665c88-757f-4473-b614-1b008b774a1e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.871290', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '0726f450-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': 'c94dac311a669f18d2ec555f5ab1a557077bd8aae4b55e453731dcc0eac83f6e'}]}, 'timestamp': '2025-10-05 09:52:38.871798', '_unique_id': '10b3e837cd1c4aab90b8a2218937b3cc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.872 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:52:38.873 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.873 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.874 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '853def6e-1ac7-46ea-9eb8-3b08ffe01ce4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:52:38.873922', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07275a26-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': 'e329f73282cb7f65c27ec398891cbf9feb3c9192b106ba2a93438c49c42c058b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:52:38.873922', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07276c1e-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': 'b4ec43261c21d3909ff163ec06cbf3a720a49ffc31b511ab495ea5439980fd4c'}]}, 'timestamp': '2025-10-05 09:52:38.874831', '_unique_id': '9167319b477e47f18503d26c01bcaebb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:52:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.877 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.877 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1f48d25b-c2a4-4e55-ae61-070841c6a7e7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.877242', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '0727dc8a-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': '081bfd7803fe7c4c794e6651a25741727881db38d3c7f04230c6502b9c5632bf'}]}, 'timestamp': '2025-10-05 09:52:38.877740', '_unique_id': 'ee32eead52f44c3a97bb57737bafae65'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.878 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.880 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.880 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3d4d9e24-fc07-4814-9a35-18ac02ff8822', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.880137', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '07284c60-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': '3c0098df5542d293b05a38fb8025dee6710f8dcc6d3daba692b635219114f52c'}]}, 'timestamp': '2025-10-05 09:52:38.880641', '_unique_id': '88f26421f473429eb689e0e1fe998e73'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.881 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:52:38.882 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.882 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '97c5966e-5c49-4f8b-9f73-77bc0c876bda', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.882739', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '0728b1aa-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': '7341efbd8cf1b1326bf88495c660541fe09144bd11360e8242a2f182005a5235'}]}, 'timestamp': '2025-10-05 09:52:38.883191', '_unique_id': '48c1d87abf5b47d39db25665bae2e848'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.884 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.885 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.885 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.885 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '64e2b862-1a85-4f80-b451-33e8aabb27f9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:52:38.885252', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07291500-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': '7ae75818488e1bcf8ab6a4cf27ca247043cf0b4aed5c26b5beed3506b0c4f86d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:52:38.885252', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0729254a-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': 'aab18bf2269adb541e7b94617e646ab798e30948669684c72f78e1c1a3d62d95'}]}, 'timestamp': '2025-10-05 09:52:38.886145', '_unique_id': '76f65c576d9f4eeeacb862cdb66654b0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.887 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.888 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.888 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b9cbe8ab-bc5e-4eeb-9da6-0c483dae72a9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.888261', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '07298a94-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': '990250822791ed802a5f43f2c7d69a31ce1d076d5c7c9dd0a52d8ac50e695529'}]}, 'timestamp': '2025-10-05 09:52:38.888745', '_unique_id': '83839c8124344685800453f60f84c88a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.889 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:52:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.890 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.906 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 12440000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd1a8bd4f-352a-4967-83be-b850e551d3ec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 12440000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:52:38.891045', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': 
None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '072c6764-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.130761876, 'message_signature': '04c09bf0b263baa570cf3db3f9449ac7f56887769878282e1f72812f1bb3821c'}]}, 'timestamp': '2025-10-05 09:52:38.907548', '_unique_id': '598fa341fb754ccf9cbe3767d0864a57'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 
05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 
05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.909 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.909 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.910 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a570f972-8b8f-42ce-8f1e-c161c61b62d3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:52:38.909815', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '072cd4c4-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': '5e55b752589627111e770a3a26dc4b27c00d0f3ad2be8d8633f1ed274c1381e9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:52:38.909815', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '072ce6bc-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': 'edf2a83409dd6226483917e3baac47a6b744109062e0b6163ea64830adb399bd'}]}, 'timestamp': '2025-10-05 09:52:38.910739', '_unique_id': '789a17f55fb941618c44107b7a10da72'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging
return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.911 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.912 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging [-]
Could not send notification to notifications. Payload={'message_id': 'b759a00b-a981-4c09-8bef-afb5e7b35b3e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:52:38.912996', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '072ee1d8-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.136908872, 'message_signature': 'a0663824179893fc757f9a434ef479ab574fc73c0dbf2145a73c65223e5de9ba'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:52:38.912996', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '072ef286-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.136908872, 'message_signature': 'f4f4495153a3388f1f6bb2fdc00be7d4a5a0965f286805ac6b186a7bceb36c10'}]}, 'timestamp': '2025-10-05 09:52:38.924146', '_unique_id': '45987fe4587149398695f29453477d16'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:52:38
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05
09:52:38.925 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:52:38 localhost
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12
ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.925 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.926 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.926 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '0d0ceb62-580a-4f28-ad5c-e6f140ce8a9f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:52:38.926425', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '072f5c80-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.130761876, 'message_signature': '2705a3f2c1967604a2538343c008705e1e3b1583dc90be9bef6ccf6fe314319c'}]}, 'timestamp': '2025-10-05 09:52:38.926873', '_unique_id': 'ba91526ac3d748368fb27af2c98b3943'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in
_reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR
oslo_messaging.notify.messaging conn.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:52:38 localhost
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.927 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.928 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:52:38.929 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.929 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '14795e07-189e-4c36-b237-c71f2fae30cd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:52:38.929013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 
0, 'disk_name': 'vda'}, 'message_id': '072fc120-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': '1100976c20176deebcdebbc56acbf0a25d9eab00b857e72a70e47c97cd6d21ae'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:52:38.929013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '072fd2be-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': '8b5259bc41ccc7469cd4f7c6d61dc1acc3a0e4cc3a52202f7c3ae3335160e864'}]}, 'timestamp': '2025-10-05 09:52:38.929882', '_unique_id': 'f979d6075782403288e710dd104f0eb0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.931 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.932 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.932 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.932 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6361e4c4-8e2a-4866-a64b-3fe7e7ed4c39', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:52:38.932219', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07303fc4-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': '02ec49bd63048badc835ea776cb903d1cef2b7d982ba9d3eb0a7755f9f798de6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:52:38.932219', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07304fdc-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.061380785, 'message_signature': 'ffbc842cc8efe7abb064f15390ebbc4efb7b29d795d12dd87f299b74d9f74645'}]}, 'timestamp': '2025-10-05 09:52:38.933084', '_unique_id': 'de714d78c22545ce9d55c719c49ba3da'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.934 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.935 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd12070f6-da43-48de-817e-ebb0aaea8d01', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:52:38.935237', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0730b54e-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.136908872, 'message_signature': 'c361251ef7e8e974d28919006d234ca3bd238969d4ec09fd600b7920b8eb4bcb'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:52:38.935237', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0730c55c-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.136908872, 'message_signature': '213ffafc0d8da66e465a5be523465a5de6d7028f4cecc68b6ae3bad5ad42c787'}]}, 'timestamp': '2025-10-05 09:52:38.936098', '_unique_id': '74d0f362760c4de39ced68835e78e15d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.938 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7ecff25f-86d8-412d-adfb-50f5e3305398', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.938209', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '073129f2-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': '98c00a28d932554f5dc0d090f1e90430c59e4f826f462eace48488fa64fc80a7'}]}, 'timestamp': '2025-10-05 09:52:38.938700', '_unique_id': '35b5f857d94c489a95bcd5f291ab8676'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.940 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.940 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fa45d043-f678-4597-93b7-260a01d6f4bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.940748', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '07318b72-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': 'b01694fbb9481b4af60cbcfb7df7835002a3c36727659dbe6c66c8a85ed6dfa6'}]}, 'timestamp': '2025-10-05 09:52:38.941192', '_unique_id': '98f1e6bf0f3a40499cd78917ec5471fc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.942 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.943 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.943 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.943 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '49e7521e-b2ab-429c-9029-ce0d262af125', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:52:38.943556', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0731f918-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.136908872, 'message_signature': '06c4c82c9f47147b5d354283d92699b2cef7dfd14d8fbf89726f16483f62820f'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:52:38.943556', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '073208fe-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.136908872, 'message_signature': '8c6f719cd32fcff4dc3785e1c556c1f22036b581d6492b0c159bb3f610cf991a'}]}, 'timestamp': '2025-10-05 09:52:38.944378', '_unique_id': '291726828fd4428c80858ecfc5b3da0b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.946 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.946 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a73653de-3254-472f-8f39-1867903f8035', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.946522', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '07326d94-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': '90753a0927af95a550f7c742fb2e77b0b7c8fde4923ffd34fcb7cd138106b4f1'}]}, 'timestamp': '2025-10-05 09:52:38.946930', '_unique_id': 'fd9371ec5aef4c92ba372744fa536763'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:52:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:52:38.948 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b143ee38-2f6b-41d4-ae1f-dc50ed8617b8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:52:38.948178', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '0732aa84-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11438.088027857, 'message_signature': '903216a7ccd9317b04b8b62f95adee908fd0697cf0a85474c52450e084948a5b'}]}, 'timestamp': '2025-10-05 09:52:38.948479', '_unique_id': 'aab3042eb57d42d0aa2023622e48d1c9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:52:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:52:38.949 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:52:39 localhost systemd[1]: Starting Ceph mon.np0005471150 for 659062ac-50b4-5607-b699-3105da7f55ee...
Oct 5 05:52:39 localhost podman[303033]:
Oct 5 05:52:39 localhost podman[303033]: 2025-10-05 09:52:39.430627409 +0000 UTC m=+0.089122288 container create e61f9f47ddc8ce6afb93939e90f8e168344b6d36c18d946619f6270f250fee30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471150, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.33.12, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, com.redhat.component=rhceph-container)
Oct 5 05:52:39 localhost systemd[1]: tmp-crun.XBEfd4.mount: Deactivated successfully.
Oct 5 05:52:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a78ab956b03a2182105dc1635b23e472806302a7dfce9362c3d857d43d91f75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 05:52:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a78ab956b03a2182105dc1635b23e472806302a7dfce9362c3d857d43d91f75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 5 05:52:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a78ab956b03a2182105dc1635b23e472806302a7dfce9362c3d857d43d91f75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 05:52:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a78ab956b03a2182105dc1635b23e472806302a7dfce9362c3d857d43d91f75/merged/var/lib/ceph/mon/ceph-np0005471150 supports timestamps until 2038 (0x7fffffff)
Oct 5 05:52:39 localhost podman[303033]: 2025-10-05 09:52:39.490530134 +0000 UTC m=+0.149025013 container init e61f9f47ddc8ce6afb93939e90f8e168344b6d36c18d946619f6270f250fee30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471150, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , architecture=x86_64, release=553, GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:52:39 localhost podman[303033]: 2025-10-05 09:52:39.392515936 +0000 UTC m=+0.051010885 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:52:39 localhost podman[303033]: 2025-10-05 09:52:39.499890617 +0000 UTC m=+0.158385496 container start e61f9f47ddc8ce6afb93939e90f8e168344b6d36c18d946619f6270f250fee30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471150, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 05:52:39 localhost bash[303033]: e61f9f47ddc8ce6afb93939e90f8e168344b6d36c18d946619f6270f250fee30
Oct 5 05:52:39 localhost systemd[1]: Started Ceph mon.np0005471150 for 659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 05:52:39 localhost ceph-mon[303051]: set uid:gid to 167:167 (ceph:ceph)
Oct 5 05:52:39 localhost ceph-mon[303051]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Oct 5 05:52:39 localhost ceph-mon[303051]: pidfile_write: ignore empty --pid-file
Oct 5 05:52:39 localhost ceph-mon[303051]: load: jerasure load: lrc
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: RocksDB version: 7.9.2
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Git sha 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: DB SUMMARY
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: DB Session ID: 4BBLVF8P1PRA7DXQPICM
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: CURRENT file: CURRENT
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: IDENTITY file: IDENTITY
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005471150/store.db dir, Total Num: 0, files:
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005471150/store.db: 000004.log size: 886 ;
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.error_if_exists: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.create_if_missing: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.paranoid_checks: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.flush_verify_memtable_count: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.env: 0x5585ea4809e0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.fs: PosixFileSystem
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.info_log: 0x5585eca9ed20
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_file_opening_threads: 16
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.statistics: (nil)
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.use_fsync: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_log_file_size: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_manifest_file_size: 1073741824
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.log_file_time_to_roll: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.keep_log_file_num: 1000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.recycle_log_file_num: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.allow_fallocate: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.allow_mmap_reads: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.allow_mmap_writes: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.use_direct_reads: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.create_missing_column_families: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.db_log_dir:
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.wal_dir:
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.table_cache_numshardbits: 6
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.WAL_ttl_seconds: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.WAL_size_limit_MB: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.manifest_preallocation_size: 4194304
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.is_fd_close_on_exec: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.advise_random_on_open: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.db_write_buffer_size: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.write_buffer_manager: 0x5585ecaaf540
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.access_hint_on_compaction_start: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.random_access_max_buffer_size: 1048576
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.use_adaptive_mutex: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.rate_limiter: (nil)
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.wal_recovery_mode: 2
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.enable_thread_tracking: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.enable_pipelined_write: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.unordered_write: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.allow_concurrent_memtable_write: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.write_thread_max_yield_usec: 100
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.write_thread_slow_yield_usec: 3
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.row_cache: None
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.wal_filter: None
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.avoid_flush_during_recovery: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.allow_ingest_behind: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.two_write_queues: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.manual_wal_flush: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.wal_compression: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.atomic_flush: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.persist_stats_to_disk: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.write_dbid_to_manifest: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.log_readahead_size: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.file_checksum_gen_factory: Unknown
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.best_efforts_recovery: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.allow_data_in_errors: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.db_host_id: __hostname__
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.enforce_single_del_contracts: true
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_background_jobs: 2
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_background_compactions: -1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_subcompactions: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.avoid_flush_during_shutdown: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.writable_file_max_buffer_size: 1048576
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.delayed_write_rate : 16777216
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_total_wal_size: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.stats_dump_period_sec: 600
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.stats_persist_period_sec: 600
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.stats_history_buffer_size: 1048576
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_open_files: -1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bytes_per_sync: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.wal_bytes_per_sync: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.strict_bytes_per_sync: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_readahead_size: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_background_flushes: -1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Compression algorithms supported:
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: #011kZSTD supported: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: #011kXpressCompression supported: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: #011kBZip2Compression supported: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: #011kLZ4Compression supported: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: #011kZlibCompression supported: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: #011kSnappyCompression supported: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005471150/store.db/MANIFEST-000005
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.merge_operator:
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_filter: None
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_filter_factory: None
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5585eca9e980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5585eca9b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.write_buffer_size: 33554432
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_write_buffer_number: 2
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression: NoCompression
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.num_levels: 7
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.min_write_buffer_number_to_merge: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression_opts.level: 32767
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression_opts.enabled: false
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.level0_file_num_compaction_trigger: 4
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bytes_for_level_base: 268435456
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.arena_block_size: 1048576
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.table_properties_collectors:
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.inplace_update_support: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.bloom_locality: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.max_successive_merges: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.force_consistency_checks: 1
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.ttl: 2592000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.enable_blob_files: false
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.min_blob_size: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.blob_file_size: 268435456
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005471150/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1247a0db-c002-4733-be87-1ce878bf8886
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657959561748, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657959565240, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression":
"NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657959, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1247a0db-c002-4733-be87-1ce878bf8886", "db_session_id": "4BBLVF8P1PRA7DXQPICM", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657959565493, "job": 1, "event": "recovery_finished"} Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5585ecac2e00 Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: DB pointer 0x5585ecbb8000 Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:52:39 localhost ceph-mon[303051]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) 
Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache 
BinnedLRUCache@0x5585eca9b350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150 does not exist in monmap, will attempt to join an existing cluster Oct 5 05:52:39 localhost ceph-mon[303051]: using public_addr v2:172.18.0.106:0/0 -> [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] Oct 5 05:52:39 localhost ceph-mon[303051]: starting mon.np0005471150 rank -1 at public addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] at bind addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005471150 fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(???) e0 preinit fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(synchronizing) e5 sync_obtain_latest_monmap Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(synchronizing) e5 sync_obtain_latest_monmap obtained monmap e5 Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(synchronizing).mds e16 new map Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps 
allow_multimds_snaps#012created#0112025-10-05T08:04:17.819317+0000#012modified#0112025-10-05T09:51:24.604984+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01180#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26863}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26863 members: 26863#012[mds.mds.np0005471152.pozuqw{0:26863} state up:active seq 14 addr [v2:172.18.0.108:6808/114949388,v1:172.18.0.108:6809/114949388] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005471151.uyxcpj{-1:17211} state up:standby seq 1 addr [v2:172.18.0.107:6808/3905827397,v1:172.18.0.107:6809/3905827397] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005471150.bsiqok{-1:17217} state up:standby seq 1 addr [v2:172.18.0.106:6808/1854153836,v1:172.18.0.106:6809/1854153836] compat {c=[1],r=[1],i=[17ff]}] Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(synchronizing).osd e81 crush map has features 3314933000854323200, adjusting msgr requires Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(synchronizing).osd e81 crush map has features 432629239337189376, adjusting msgr requires Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(synchronizing).osd e81 crush map has features 432629239337189376, adjusting msgr requires Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(synchronizing).osd e81 crush map has features 
432629239337189376, adjusting msgr requires Oct 5 05:52:39 localhost ceph-mon[303051]: Removing key for mds.mds.np0005471148.dhrare Oct 5 05:52:39 localhost ceph-mon[303051]: Removing daemon mds.mds.np0005471147.whcunt from np0005471147.localdomain -- ports [] Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth rm", "entity": "mds.mds.np0005471147.whcunt"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd='[{"prefix": "auth rm", "entity": "mds.mds.np0005471147.whcunt"}]': finished Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Removing key for mds.mds.np0005471147.whcunt Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' 
cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label mgr to host np0005471150.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label mgr to host np0005471151.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label mgr to host np0005471152.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: 
from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Oct 5 05:52:39 localhost ceph-mon[303051]: Saving service mgr spec with placement label:mgr Oct 5 05:52:39 localhost ceph-mon[303051]: Deploying daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", 
"allow *"]}]': finished Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Deploying daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label mon to host np0005471146.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Oct 5 05:52:39 localhost ceph-mon[303051]: Added label _admin to host np0005471146.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: Deploying daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label mon to host np0005471147.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' 
entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label _admin to host np0005471147.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label mon to host np0005471148.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 
172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label _admin to host np0005471148.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label mon to host np0005471150.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label _admin to host np0005471150.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Updating 
np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label mon to host np0005471151.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label _admin to host np0005471151.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:39 localhost ceph-mon[303051]: Added label mon to host np0005471152.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Updating 
np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Added label _admin to host np0005471152.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:39 localhost ceph-mon[303051]: Saving service mon spec with placement label:mon Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:52:39 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 
172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: Deploying daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: Deploying daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471146 calling monitor election Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471148 calling monitor election Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471147 calling monitor election Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471152 calling monitor election Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471146 is new leader, mons np0005471146,np0005471148,np0005471147,np0005471152 in quorum (ranks 0,1,2,3) Oct 5 05:52:39 localhost ceph-mon[303051]: overall HEALTH_OK Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:39 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:52:39 localhost ceph-mon[303051]: Deploying daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:52:39 localhost ceph-mon[303051]: mon.np0005471150@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Oct 5 05:52:41 localhost nova_compute[297021]: 2025-10-05 09:52:41.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:52:41 localhost podman[303090]: 2025-10-05 09:52:41.680256959 +0000 UTC m=+0.087205486 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:52:41 localhost podman[303090]: 2025-10-05 09:52:41.690905528 +0000 UTC m=+0.097854055 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001) Oct 5 05:52:41 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:52:42 localhost nova_compute[297021]: 2025-10-05 09:52:42.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:43 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x56322b0171e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Oct 5 05:52:45 localhost systemd[1]: tmp-crun.v5wpcL.mount: Deactivated successfully. Oct 5 05:52:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:52:45 localhost podman[303236]: 2025-10-05 09:52:45.186567313 +0000 UTC m=+0.116841150 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., ceph=True, release=553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container) Oct 5 05:52:45 localhost systemd[1]: tmp-crun.AQ1q1O.mount: Deactivated successfully. Oct 5 05:52:45 localhost podman[303236]: 2025-10-05 09:52:45.307836471 +0000 UTC m=+0.238110298 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, release=553, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux ) Oct 5 05:52:45 localhost podman[303257]: 2025-10-05 09:52:45.306815384 +0000 UTC m=+0.111866615 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:52:45 localhost podman[303257]: 2025-10-05 09:52:45.392899789 +0000 UTC m=+0.197951000 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:52:45 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:52:45 localhost ceph-mon[303051]: mon.np0005471150@-1(probing) e6 my rank is now 5 (was -1) Oct 5 05:52:45 localhost ceph-mon[303051]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:52:45 localhost ceph-mon[303051]: paxos.5).electionLogic(0) init, first boot, initializing epoch at 1 Oct 5 05:52:45 localhost ceph-mon[303051]: mon.np0005471150@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:46 localhost nova_compute[297021]: 2025-10-05 09:52:46.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:47 localhost nova_compute[297021]: 2025-10-05 09:52:47.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:52:47 localhost podman[303386]: 2025-10-05 09:52:47.695378772 +0000 UTC m=+0.090344271 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute) Oct 5 05:52:47 localhost podman[303386]: 2025-10-05 09:52:47.710813531 +0000 UTC m=+0.105779000 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Oct 5 05:52:47 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:52:47 localhost ceph-mds[300279]: mds.beacon.mds.np0005471150.bsiqok missed beacon ack from the monitors Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471150@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471150@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471150@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471146 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471148 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471147 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471152 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471151 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471146 is new leader, mons np0005471146,np0005471148,np0005471147,np0005471152,np0005471151 in quorum (ranks 0,1,2,3,4) Oct 5 05:52:48 localhost ceph-mon[303051]: overall HEALTH_OK Oct 5 05:52:48 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:48 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471150@5(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:48 localhost 
ceph-mon[303051]: mgrc update_daemon_metadata mon.np0005471150 metadata {addrs=[v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005471150.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005471150.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471146 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471148 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471152 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471147 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471151 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471150 calling monitor election Oct 5 05:52:48 localhost ceph-mon[303051]: mon.np0005471146 is new leader, mons np0005471146,np0005471148,np0005471147,np0005471152,np0005471151,np0005471150 in quorum (ranks 0,1,2,3,4,5) Oct 5 05:52:48 localhost ceph-mon[303051]: overall HEALTH_OK Oct 5 05:52:48 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:48 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost 
ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:52:50 localhost podman[303491]: 2025-10-05 09:52:50.192049343 +0000 UTC m=+0.092207892 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git) Oct 5 05:52:50 localhost podman[303491]: 2025-10-05 09:52:50.235854101 +0000 UTC m=+0.136012710 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, 
container_name=openstack_network_exporter) Oct 5 05:52:50 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:52:50 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:50 localhost ceph-mon[303051]: Updating np0005471146.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[303051]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[303051]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:51 localhost podman[248506]: time="2025-10-05T09:52:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:52:51 localhost podman[248506]: @ - - [05/Oct/2025:09:52:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:52:51 localhost podman[248506]: @ - - [05/Oct/2025:09:52:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18821 "" "Go-http-client/1.1" Oct 5 05:52:51 localhost nova_compute[297021]: 2025-10-05 09:52:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:52 localhost ceph-mon[303051]: Updating np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[303051]: Updating 
np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[303051]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[303051]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost 
ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost openstack_network_exporter[250601]: ERROR 09:52:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:52:52 localhost openstack_network_exporter[250601]: ERROR 09:52:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:52:52 localhost openstack_network_exporter[250601]: ERROR 09:52:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:52:52 localhost openstack_network_exporter[250601]: ERROR 09:52:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:52:52 localhost openstack_network_exporter[250601]: Oct 5 05:52:52 localhost openstack_network_exporter[250601]: ERROR 09:52:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:52:52 localhost openstack_network_exporter[250601]: Oct 5 05:52:52 localhost nova_compute[297021]: 2025-10-05 09:52:52.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:53 localhost ceph-mon[303051]: Reconfiguring mon.np0005471146 (monmap changed)... 
Oct 5 05:52:53 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:52:53 localhost ceph-mon[303051]: Reconfiguring daemon mon.np0005471146 on np0005471146.localdomain Oct 5 05:52:53 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:54 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:54 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471146.xqzesq (monmap changed)... Oct 5 05:52:54 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471146.xqzesq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:54 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471146.xqzesq on np0005471146.localdomain Oct 5 05:52:54 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:54 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:54 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:54 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471146.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:52:55 localhost ceph-mon[303051]: Reconfiguring crash.np0005471146 (monmap changed)... 
Oct 5 05:52:55 localhost ceph-mon[303051]: Reconfiguring daemon crash.np0005471146 on np0005471146.localdomain Oct 5 05:52:55 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:55 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:55 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:52:56 localhost ceph-mon[303051]: Reconfiguring crash.np0005471147 (monmap changed)... Oct 5 05:52:56 localhost ceph-mon[303051]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain Oct 5 05:52:56 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:56 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:56 localhost ceph-mon[303051]: Reconfiguring mon.np0005471147 (monmap changed)... Oct 5 05:52:56 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:52:56 localhost ceph-mon[303051]: Reconfiguring daemon mon.np0005471147 on np0005471147.localdomain Oct 5 05:52:56 localhost nova_compute[297021]: 2025-10-05 09:52:56.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:52:56 localhost podman[303831]: 2025-10-05 09:52:56.702638254 +0000 UTC m=+0.084642997 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:52:56 localhost podman[303831]: 2025-10-05 09:52:56.741935269 +0000 UTC m=+0.123940002 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:52:56 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:52:57 localhost ceph-mon[303051]: mon.np0005471150@5(peon).osd e81 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Oct 5 05:52:57 localhost ceph-mon[303051]: mon.np0005471150@5(peon).osd e81 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Oct 5 05:52:57 localhost ceph-mon[303051]: mon.np0005471150@5(peon).osd e82 e82: 6 total, 6 up, 6 in Oct 5 05:52:57 localhost nova_compute[297021]: 2025-10-05 09:52:57.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:52:57 localhost systemd[1]: session-27.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-27.scope: Consumed 3min 37.187s CPU time. Oct 5 05:52:57 localhost systemd-logind[760]: Session 27 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd[1]: session-20.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-19.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-21.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd-logind[760]: Session 19 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 20 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 21 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd[1]: session-18.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-23.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-15.scope: Deactivated successfully. 
Oct 5 05:52:57 localhost systemd[1]: session-25.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-26.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-24.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-17.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-22.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 27. Oct 5 05:52:57 localhost systemd-logind[760]: Session 23 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 15 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 22 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 26 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 24 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 17 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 25 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 18 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 20. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 19. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 21. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 18. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 23. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 15. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 25. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 26. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 24. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 17. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 22. 
Oct 5 05:52:57 localhost sshd[303854]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:52:57 localhost systemd-logind[760]: New session 68 of user ceph-admin. Oct 5 05:52:57 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:57 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:57 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)... Oct 5 05:52:57 localhost ceph-mon[303051]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:57 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain Oct 5 05:52:57 localhost ceph-mon[303051]: from='client.? 172.18.0.103:0/3926179074' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:52:57 localhost ceph-mon[303051]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:52:57 localhost ceph-mon[303051]: Activating manager daemon np0005471151.jecxod Oct 5 05:52:57 localhost ceph-mon[303051]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:52:57 localhost ceph-mon[303051]: Manager daemon np0005471151.jecxod is now available Oct 5 05:52:57 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} : dispatch Oct 5 05:52:57 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} : dispatch Oct 5 05:52:57 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} : dispatch Oct 5 05:52:57 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} : dispatch Oct 5 05:52:57 localhost systemd[1]: Started Session 68 of User ceph-admin. 
Oct 5 05:52:59 localhost podman[303966]: 2025-10-05 09:52:59.06974411 +0000 UTC m=+0.094376031 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, CEPH_POINT_RELEASE=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, ceph=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, version=7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:52:59 localhost podman[303966]: 2025-10-05 09:52:59.187896734 +0000 UTC m=+0.212528605 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, 
build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , release=553, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main) Oct 5 05:52:59 localhost ceph-mon[303051]: mon.np0005471150@5(peon).osd e82 _set_new_cache_sizes cache_size:1019702816 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:52:59 localhost podman[304071]: 2025-10-05 09:52:59.855666954 +0000 UTC m=+0.096512588 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', 
'--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:52:59 localhost podman[304071]: 2025-10-05 09:52:59.870747274 +0000 UTC m=+0.111592958 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 
05:52:59 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:53:00 localhost ceph-mon[303051]: [05/Oct/2025:09:52:59] ENGINE Bus STARTING Oct 5 05:53:00 localhost ceph-mon[303051]: [05/Oct/2025:09:52:59] ENGINE Serving on http://172.18.0.107:8765 Oct 5 05:53:00 localhost ceph-mon[303051]: [05/Oct/2025:09:52:59] ENGINE Serving on https://172.18.0.107:7150 Oct 5 05:53:00 localhost ceph-mon[303051]: [05/Oct/2025:09:52:59] ENGINE Bus STARTED Oct 5 05:53:00 localhost ceph-mon[303051]: [05/Oct/2025:09:52:59] ENGINE Client ('172.18.0.107', 36886) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:01 localhost nova_compute[297021]: 2025-10-05 09:53:01.420 2 DEBUG oslo_service.periodic_task [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:01 localhost nova_compute[297021]: 2025-10-05 09:53:01.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config 
rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471146", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471146", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471147", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471147", "name": 
"osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:53:02 localhost ceph-mon[303051]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 05:53:02 localhost ceph-mon[303051]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:53:02 localhost ceph-mon[303051]: 
from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:53:02 localhost ceph-mon[303051]: Updating np0005471146.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[303051]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[303051]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost nova_compute[297021]: 2025-10-05 09:53:02.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:03 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[303051]: Updating np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[303051]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[303051]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:03 localhost ceph-mon[303051]: Updating 
np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:03 localhost ceph-mon[303051]: Updating np0005471146.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:03 localhost nova_compute[297021]: 2025-10-05 09:53:03.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:03 localhost nova_compute[297021]: 2025-10-05 09:53:03.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:03 localhost nova_compute[297021]: 2025-10-05 09:53:03.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:03 localhost nova_compute[297021]: 2025-10-05 09:53:03.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:03 localhost nova_compute[297021]: 2025-10-05 09:53:03.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:53:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:53:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:53:03 localhost podman[304712]: 2025-10-05 09:53:03.937067343 +0000 UTC m=+0.088996745 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 05:53:03 
localhost podman[304711]: 2025-10-05 09:53:03.993682108 +0000 UTC m=+0.145953229 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:53:04 localhost podman[304712]: 2025-10-05 09:53:04.001751417 +0000 UTC m=+0.153680819 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, 
name=multipathd, tcib_managed=true, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:53:04 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:53:04 localhost podman[304711]: 2025-10-05 09:53:04.029254093 +0000 UTC m=+0.181525214 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:53:04 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:53:04 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[303051]: Updating np0005471147.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[303051]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[303051]: Updating np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[303051]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost nova_compute[297021]: 2025-10-05 09:53:04.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:04 localhost nova_compute[297021]: 2025-10-05 09:53:04.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:04 localhost nova_compute[297021]: 2025-10-05 09:53:04.452 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:53:04 localhost nova_compute[297021]: 2025-10-05 09:53:04.453 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:53:04 localhost nova_compute[297021]: 2025-10-05 09:53:04.453 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:53:04 localhost nova_compute[297021]: 2025-10-05 09:53:04.454 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:53:04 localhost nova_compute[297021]: 2025-10-05 09:53:04.454 2 DEBUG 
oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:53:04 localhost ceph-mon[303051]: mon.np0005471150@5(peon).osd e82 _set_new_cache_sizes cache_size:1020047019 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:04 localhost ceph-mon[303051]: mon.np0005471150@5(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:53:04 localhost ceph-mon[303051]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/191857328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:53:04 localhost nova_compute[297021]: 2025-10-05 09:53:04.923 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:53:04 localhost nova_compute[297021]: 2025-10-05 09:53:04.999 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:04.999 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.180 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have 
multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.182 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11792MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.182 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.182 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' 
entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:05 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.275 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.276 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.276 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.315 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 
5 05:53:05 localhost ceph-mon[303051]: mon.np0005471150@5(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:53:05 localhost ceph-mon[303051]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1578069989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.773 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.781 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.798 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.800 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain 
_update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:53:05 localhost nova_compute[297021]: 2025-10-05 09:53:05.801 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:53:06 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)... Oct 5 05:53:06 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain Oct 5 05:53:06 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:06 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:06 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:53:06 localhost nova_compute[297021]: 2025-10-05 09:53:06.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:06 localhost nova_compute[297021]: 2025-10-05 09:53:06.797 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:06 localhost nova_compute[297021]: 2025-10-05 09:53:06.816 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:06 localhost nova_compute[297021]: 2025-10-05 09:53:06.817 2 DEBUG 
nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:53:06 localhost nova_compute[297021]: 2025-10-05 09:53:06.817 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.063836) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987063949, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 10889, "num_deletes": 512, "total_data_size": 16196245, "memory_usage": 16967104, "flush_reason": "Manual Compaction"} Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987180213, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 11709878, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 10894, "table_properties": {"data_size": 11655722, "index_size": 28834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 24261, "raw_key_size": 255654, "raw_average_key_size": 26, "raw_value_size": 11491131, "raw_average_value_size": 1185, "num_data_blocks": 1088, "num_entries": 9694, "num_filter_entries": 9694, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657959, "oldest_key_time": 1759657959, "file_creation_time": 1759657987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1247a0db-c002-4733-be87-1ce878bf8886", "db_session_id": "4BBLVF8P1PRA7DXQPICM", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 116446 microseconds, and 26885 cpu microseconds. 
Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.180287) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 11709878 bytes OK Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.180315) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.188445) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.188466) EVENT_LOG_v1 {"time_micros": 1759657987188460, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.188486) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 16122366, prev total WAL file size 16122366, number of live WAL files 2. Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.191366) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. 
'7061786F73003130353433' seq:0, type:0; will stop at (end) Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(2012B)] Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987191447, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 11711890, "oldest_snapshot_seqno": -1} Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9187 keys, 11702668 bytes, temperature: kUnknown Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987257114, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 11702668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11649842, "index_size": 28789, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 247278, "raw_average_key_size": 26, "raw_value_size": 11491786, "raw_average_value_size": 1250, "num_data_blocks": 1087, "num_entries": 9187, "num_filter_entries": 9187, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657959, "oldest_key_time": 0, "file_creation_time": 1759657987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1247a0db-c002-4733-be87-1ce878bf8886", "db_session_id": "4BBLVF8P1PRA7DXQPICM", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.257439) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 11702668 bytes Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.259124) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.1 rd, 177.9 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.2, 0.0 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 9699, records dropped: 512 output_compression: NoCompression Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.259153) EVENT_LOG_v1 {"time_micros": 1759657987259141, "job": 4, "event": "compaction_finished", "compaction_time_micros": 65773, "compaction_time_cpu_micros": 23776, "output_level": 6, "num_output_files": 1, "total_output_size": 11702668, "num_input_records": 9699, "num_output_records": 9187, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000014.sst immediately, 
rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987260862, "job": 4, "event": "table_file_deletion", "file_number": 14} Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987260965, "job": 4, "event": "table_file_deletion", "file_number": 8} Oct 5 05:53:07 localhost ceph-mon[303051]: rocksdb: (Original Log Time 2025/10/05-09:53:07.191234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:53:07 localhost ceph-mon[303051]: Reconfiguring mon.np0005471148 (monmap changed)... Oct 5 05:53:07 localhost ceph-mon[303051]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain Oct 5 05:53:07 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:07 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:07 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:07 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:07 localhost nova_compute[297021]: 2025-10-05 09:53:07.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:07 localhost nova_compute[297021]: 2025-10-05 09:53:07.409 2 DEBUG 
oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:53:07 localhost nova_compute[297021]: 2025-10-05 09:53:07.410 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:53:07 localhost nova_compute[297021]: 2025-10-05 09:53:07.410 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:53:07 localhost nova_compute[297021]: 2025-10-05 09:53:07.410 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:53:08 localhost nova_compute[297021]: 2025-10-05 09:53:08.148 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:53:08 localhost nova_compute[297021]: 2025-10-05 09:53:08.170 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:53:08 localhost nova_compute[297021]: 2025-10-05 09:53:08.171 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:53:08 localhost nova_compute[297021]: 2025-10-05 09:53:08.172 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:08 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)... 
Oct 5 05:53:08 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain Oct 5 05:53:08 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:08 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:08 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:08 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:08 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:09 localhost podman[305024]: Oct 5 05:53:09 localhost podman[305024]: 2025-10-05 09:53:09.436816799 +0000 UTC m=+0.080025281 container create 627254df7f48e041916980b577e91d69ec6f86a0d46cb1f31082ed6431889da6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , 
GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:53:09 localhost systemd[1]: Started libpod-conmon-627254df7f48e041916980b577e91d69ec6f86a0d46cb1f31082ed6431889da6.scope. Oct 5 05:53:09 localhost systemd[1]: Started libcrun container. Oct 5 05:53:09 localhost podman[305024]: 2025-10-05 09:53:09.403965569 +0000 UTC m=+0.047174101 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:09 localhost podman[305024]: 2025-10-05 09:53:09.516375987 +0000 UTC m=+0.159584479 container init 627254df7f48e041916980b577e91d69ec6f86a0d46cb1f31082ed6431889da6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, architecture=x86_64, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=553, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2025-09-24T08:57:55, version=7) Oct 5 05:53:09 localhost podman[305024]: 2025-10-05 09:53:09.527067218 +0000 UTC m=+0.170275700 container start 
627254df7f48e041916980b577e91d69ec6f86a0d46cb1f31082ed6431889da6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public) Oct 5 05:53:09 localhost podman[305024]: 2025-10-05 09:53:09.527383196 +0000 UTC m=+0.170591728 container attach 627254df7f48e041916980b577e91d69ec6f86a0d46cb1f31082ed6431889da6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, RELEASE=main, distribution-scope=public, release=553, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph) Oct 5 05:53:09 localhost pedantic_khayyam[305039]: 167 167 Oct 5 05:53:09 localhost systemd[1]: libpod-627254df7f48e041916980b577e91d69ec6f86a0d46cb1f31082ed6431889da6.scope: Deactivated successfully. Oct 5 05:53:09 localhost podman[305024]: 2025-10-05 09:53:09.534370695 +0000 UTC m=+0.177579207 container died 627254df7f48e041916980b577e91d69ec6f86a0d46cb1f31082ed6431889da6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, release=553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, name=rhceph) Oct 5 05:53:09 localhost ceph-mon[303051]: mon.np0005471150@5(peon).osd e82 
_set_new_cache_sizes cache_size:1020054621 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:09 localhost podman[305044]: 2025-10-05 09:53:09.641334466 +0000 UTC m=+0.092869210 container remove 627254df7f48e041916980b577e91d69ec6f86a0d46cb1f31082ed6431889da6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:53:09 localhost systemd[1]: libpod-conmon-627254df7f48e041916980b577e91d69ec6f86a0d46cb1f31082ed6431889da6.scope: Deactivated successfully. Oct 5 05:53:09 localhost ceph-mon[303051]: Reconfiguring crash.np0005471148 (monmap changed)... 
Oct 5 05:53:09 localhost ceph-mon[303051]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:53:09 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:09 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:09 localhost ceph-mon[303051]: Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:53:09 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:09 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:09 localhost ceph-mon[303051]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:53:09 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:09 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:09 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:53:10 localhost podman[305113]: Oct 5 05:53:10 localhost podman[305113]: 2025-10-05 09:53:10.385800516 +0000 UTC m=+0.075128619 container create 48fbdc9122f7c309090e58f3615cdd7a754aa8ff4f9854d4dd6380647379ef07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_grothendieck, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.33.12, release=553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , 
name=rhceph, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64) Oct 5 05:53:10 localhost systemd[1]: Started libpod-conmon-48fbdc9122f7c309090e58f3615cdd7a754aa8ff4f9854d4dd6380647379ef07.scope. Oct 5 05:53:10 localhost systemd[1]: Started libcrun container. Oct 5 05:53:10 localhost systemd[1]: var-lib-containers-storage-overlay-a5f145fd91bbbb8f74a8091440b1473632b59c7ef410cf741a667cd6f8ea5cf1-merged.mount: Deactivated successfully. 
Oct 5 05:53:10 localhost podman[305113]: 2025-10-05 09:53:10.354568219 +0000 UTC m=+0.043896402 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:10 localhost podman[305113]: 2025-10-05 09:53:10.454388507 +0000 UTC m=+0.143716620 container init 48fbdc9122f7c309090e58f3615cdd7a754aa8ff4f9854d4dd6380647379ef07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_grothendieck, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True) Oct 5 05:53:10 localhost podman[305113]: 2025-10-05 09:53:10.464448589 +0000 UTC m=+0.153776692 container start 48fbdc9122f7c309090e58f3615cdd7a754aa8ff4f9854d4dd6380647379ef07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_grothendieck, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, 
ceph=True, RELEASE=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True) Oct 5 05:53:10 localhost podman[305113]: 2025-10-05 09:53:10.464731576 +0000 UTC m=+0.154059679 container attach 48fbdc9122f7c309090e58f3615cdd7a754aa8ff4f9854d4dd6380647379ef07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_grothendieck, ceph=True, CEPH_POINT_RELEASE=, version=7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:53:10 
localhost great_grothendieck[305128]: 167 167 Oct 5 05:53:10 localhost systemd[1]: libpod-48fbdc9122f7c309090e58f3615cdd7a754aa8ff4f9854d4dd6380647379ef07.scope: Deactivated successfully. Oct 5 05:53:10 localhost podman[305113]: 2025-10-05 09:53:10.469330051 +0000 UTC m=+0.158658204 container died 48fbdc9122f7c309090e58f3615cdd7a754aa8ff4f9854d4dd6380647379ef07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_grothendieck, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, ceph=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git) Oct 5 05:53:10 localhost podman[305133]: 2025-10-05 09:53:10.570622248 +0000 UTC m=+0.087966886 container remove 48fbdc9122f7c309090e58f3615cdd7a754aa8ff4f9854d4dd6380647379ef07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_grothendieck, build-date=2025-09-24T08:57:55, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, description=Red 
Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:53:10 localhost systemd[1]: libpod-conmon-48fbdc9122f7c309090e58f3615cdd7a754aa8ff4f9854d4dd6380647379ef07.scope: Deactivated successfully. Oct 5 05:53:10 localhost ceph-mon[303051]: Reconfiguring osd.1 (monmap changed)... Oct 5 05:53:10 localhost ceph-mon[303051]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:53:10 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:10 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:10 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:53:11 localhost podman[305212]: Oct 5 05:53:11 localhost podman[305212]: 2025-10-05 09:53:11.392040185 +0000 UTC m=+0.079983219 container create 75716ff57e92f866f917df586d824e7f5b458434cbeb508304a8d35fb4de9337 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_bartik, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, 
com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=) Oct 5 05:53:11 localhost systemd[1]: Started libpod-conmon-75716ff57e92f866f917df586d824e7f5b458434cbeb508304a8d35fb4de9337.scope. Oct 5 05:53:11 localhost systemd[1]: var-lib-containers-storage-overlay-f7a14e13b8033dc5b25d5e9e3bdc43e7da6be90c32e9c98c672f57508c0a06d5-merged.mount: Deactivated successfully. Oct 5 05:53:11 localhost systemd[1]: Started libcrun container. 
Oct 5 05:53:11 localhost podman[305212]: 2025-10-05 09:53:11.359319358 +0000 UTC m=+0.047262422 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:11 localhost podman[305212]: 2025-10-05 09:53:11.464781908 +0000 UTC m=+0.152724932 container init 75716ff57e92f866f917df586d824e7f5b458434cbeb508304a8d35fb4de9337 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_bartik, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:53:11 localhost podman[305212]: 2025-10-05 09:53:11.473988638 +0000 UTC m=+0.161931662 container start 75716ff57e92f866f917df586d824e7f5b458434cbeb508304a8d35fb4de9337 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_bartik, release=553, GIT_CLEAN=True, io.buildah.version=1.33.12, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, vendor=Red Hat, Inc., RELEASE=main) Oct 5 05:53:11 localhost podman[305212]: 2025-10-05 09:53:11.474216944 +0000 UTC m=+0.162160008 container attach 75716ff57e92f866f917df586d824e7f5b458434cbeb508304a8d35fb4de9337 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_bartik, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, release=553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, architecture=x86_64, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph) Oct 5 05:53:11 
localhost magical_bartik[305227]: 167 167 Oct 5 05:53:11 localhost systemd[1]: libpod-75716ff57e92f866f917df586d824e7f5b458434cbeb508304a8d35fb4de9337.scope: Deactivated successfully. Oct 5 05:53:11 localhost podman[305212]: 2025-10-05 09:53:11.478200392 +0000 UTC m=+0.166143476 container died 75716ff57e92f866f917df586d824e7f5b458434cbeb508304a8d35fb4de9337 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_bartik, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Oct 5 05:53:11 localhost podman[305232]: 2025-10-05 09:53:11.584592478 +0000 UTC m=+0.093905468 container remove 75716ff57e92f866f917df586d824e7f5b458434cbeb508304a8d35fb4de9337 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_bartik, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7) Oct 5 05:53:11 localhost systemd[1]: libpod-conmon-75716ff57e92f866f917df586d824e7f5b458434cbeb508304a8d35fb4de9337.scope: Deactivated successfully. Oct 5 05:53:11 localhost nova_compute[297021]: 2025-10-05 09:53:11.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:11 localhost ceph-mon[303051]: Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:53:11 localhost ceph-mon[303051]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:53:11 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:11 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:11 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:53:11 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:53:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:53:12 localhost podman[305272]: 2025-10-05 09:53:12.01314056 +0000 UTC m=+0.084127902 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Oct 5 05:53:12 localhost podman[305272]: 2025-10-05 09:53:12.024751975 +0000 UTC m=+0.095739317 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:53:12 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:53:12 localhost nova_compute[297021]: 2025-10-05 09:53:12.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:12 localhost systemd[1]: var-lib-containers-storage-overlay-221db1a3d4b33f921cdc368abb72ba3c92b144e8ea8ebc29d53a9c0523fd654c-merged.mount: Deactivated successfully. 
Oct 5 05:53:12 localhost podman[305324]: Oct 5 05:53:12 localhost podman[305324]: 2025-10-05 09:53:12.471218924 +0000 UTC m=+0.094957106 container create 14e41b4aa97d4a964dceb4cc1eb01a2663cafa85120083d081b987f1a5c03829 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_wilson, name=rhceph, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, RELEASE=main, version=7, ceph=True, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:53:12 localhost systemd[1]: Started libpod-conmon-14e41b4aa97d4a964dceb4cc1eb01a2663cafa85120083d081b987f1a5c03829.scope. Oct 5 05:53:12 localhost systemd[1]: Started libcrun container. 
Oct 5 05:53:12 localhost podman[305324]: 2025-10-05 09:53:12.430640753 +0000 UTC m=+0.054378935 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:12 localhost podman[305324]: 2025-10-05 09:53:12.531671243 +0000 UTC m=+0.155409395 container init 14e41b4aa97d4a964dceb4cc1eb01a2663cafa85120083d081b987f1a5c03829 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_wilson, GIT_CLEAN=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, release=553, version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Oct 5 05:53:12 localhost podman[305324]: 2025-10-05 09:53:12.542225329 +0000 UTC m=+0.165963501 container start 14e41b4aa97d4a964dceb4cc1eb01a2663cafa85120083d081b987f1a5c03829 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_wilson, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, 
io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , release=553, ceph=True, architecture=x86_64, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, version=7, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 05:53:12 localhost podman[305324]: 2025-10-05 09:53:12.542575269 +0000 UTC m=+0.166313421 container attach 14e41b4aa97d4a964dceb4cc1eb01a2663cafa85120083d081b987f1a5c03829 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_wilson, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, release=553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, version=7) Oct 5 
05:53:12 localhost recursing_wilson[305338]: 167 167 Oct 5 05:53:12 localhost systemd[1]: libpod-14e41b4aa97d4a964dceb4cc1eb01a2663cafa85120083d081b987f1a5c03829.scope: Deactivated successfully. Oct 5 05:53:12 localhost podman[305324]: 2025-10-05 09:53:12.546214937 +0000 UTC m=+0.169953119 container died 14e41b4aa97d4a964dceb4cc1eb01a2663cafa85120083d081b987f1a5c03829 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_wilson, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.component=rhceph-container) Oct 5 05:53:12 localhost podman[305343]: 2025-10-05 09:53:12.644590225 +0000 UTC m=+0.084635196 container remove 14e41b4aa97d4a964dceb4cc1eb01a2663cafa85120083d081b987f1a5c03829 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_wilson, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, ceph=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, release=553, distribution-scope=public) Oct 5 05:53:12 localhost systemd[1]: libpod-conmon-14e41b4aa97d4a964dceb4cc1eb01a2663cafa85120083d081b987f1a5c03829.scope: Deactivated successfully. Oct 5 05:53:12 localhost ceph-mon[303051]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... 
Oct 5 05:53:12 localhost ceph-mon[303051]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:53:12 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:12 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:12 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:12 localhost ceph-mon[303051]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:13 localhost podman[305413]: Oct 5 05:53:13 localhost podman[305413]: 2025-10-05 09:53:13.335601446 +0000 UTC m=+0.079865717 container create 0bf4618aa235e313b4fed2d13e8ee4d3ee0a6156853d9c39c132734d4fc0b23e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_sinoussi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.33.12, version=7, architecture=x86_64, release=553, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True) Oct 5 05:53:13 localhost systemd[1]: Started libpod-conmon-0bf4618aa235e313b4fed2d13e8ee4d3ee0a6156853d9c39c132734d4fc0b23e.scope. Oct 5 05:53:13 localhost systemd[1]: Started libcrun container. Oct 5 05:53:13 localhost podman[305413]: 2025-10-05 09:53:13.397853444 +0000 UTC m=+0.142117735 container init 0bf4618aa235e313b4fed2d13e8ee4d3ee0a6156853d9c39c132734d4fc0b23e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_sinoussi, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, ceph=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main) Oct 5 05:53:13 localhost podman[305413]: 2025-10-05 09:53:13.303540536 +0000 UTC m=+0.047804857 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:13 localhost podman[305413]: 2025-10-05 09:53:13.407470565 +0000 UTC m=+0.151734866 container start 0bf4618aa235e313b4fed2d13e8ee4d3ee0a6156853d9c39c132734d4fc0b23e 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_sinoussi, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, distribution-scope=public, release=553, GIT_CLEAN=True, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:53:13 localhost podman[305413]: 2025-10-05 09:53:13.407694081 +0000 UTC m=+0.151958372 container attach 0bf4618aa235e313b4fed2d13e8ee4d3ee0a6156853d9c39c132734d4fc0b23e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_sinoussi, release=553, vcs-type=git, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:53:13 localhost sharp_sinoussi[305428]: 167 167 Oct 5 05:53:13 localhost systemd[1]: libpod-0bf4618aa235e313b4fed2d13e8ee4d3ee0a6156853d9c39c132734d4fc0b23e.scope: Deactivated successfully. Oct 5 05:53:13 localhost podman[305413]: 2025-10-05 09:53:13.412165042 +0000 UTC m=+0.156429353 container died 0bf4618aa235e313b4fed2d13e8ee4d3ee0a6156853d9c39c132734d4fc0b23e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_sinoussi, description=Red Hat Ceph Storage 7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, version=7, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc.) 
Oct 5 05:53:13 localhost systemd[1]: var-lib-containers-storage-overlay-23aadce9676295acf546d1ec5117bba1825b3493c456dbd84fea7abc00ed0b96-merged.mount: Deactivated successfully. Oct 5 05:53:13 localhost systemd[1]: var-lib-containers-storage-overlay-5ee5f568234cf078c9e97bfc8d8a95e22ac420c50499b26182634488f70a74ac-merged.mount: Deactivated successfully. Oct 5 05:53:13 localhost podman[305433]: 2025-10-05 09:53:13.515834193 +0000 UTC m=+0.094970915 container remove 0bf4618aa235e313b4fed2d13e8ee4d3ee0a6156853d9c39c132734d4fc0b23e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_sinoussi, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, RELEASE=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, release=553, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:53:13 localhost systemd[1]: libpod-conmon-0bf4618aa235e313b4fed2d13e8ee4d3ee0a6156853d9c39c132734d4fc0b23e.scope: Deactivated successfully. 
Oct 5 05:53:13 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x56322b0171e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Oct 5 05:53:13 localhost ceph-mgr[301561]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Oct 5 05:53:13 localhost ceph-mgr[301561]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Oct 5 05:53:13 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x563234a94000 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Oct 5 05:53:13 localhost ceph-mon[303051]: mon.np0005471150@5(peon) e7 my rank is now 4 (was 5) Oct 5 05:53:13 localhost ceph-mon[303051]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:53:13 localhost ceph-mon[303051]: paxos.4).electionLogic(26) init, last seen epoch 26 Oct 5 05:53:13 localhost ceph-mon[303051]: mon.np0005471150@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:53:13 localhost ceph-mon[303051]: mon.np0005471150@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:53:13 localhost ceph-mon[303051]: mon.np0005471150@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:53:13 localhost ceph-mon[303051]: mon.np0005471150@4(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:53:14 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:53:14 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:53:14 localhost ceph-mon[303051]: Reconfiguring mon.np0005471150 (monmap changed)... 
Oct 5 05:53:14 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:53:14 localhost ceph-mon[303051]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:53:14 localhost ceph-mon[303051]: Remove daemons mon.np0005471146 Oct 5 05:53:14 localhost ceph-mon[303051]: Safe to remove mon.np0005471146: new quorum should be ['np0005471148', 'np0005471147', 'np0005471152', 'np0005471151', 'np0005471150'] (from ['np0005471148', 'np0005471147', 'np0005471152', 'np0005471151', 'np0005471150']) Oct 5 05:53:14 localhost ceph-mon[303051]: Removing monitor np0005471146 from monmap... Oct 5 05:53:14 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "mon rm", "name": "np0005471146"} : dispatch Oct 5 05:53:14 localhost ceph-mon[303051]: Removing daemon mon.np0005471146 from np0005471146.localdomain -- ports [] Oct 5 05:53:14 localhost ceph-mon[303051]: mon.np0005471152 calling monitor election Oct 5 05:53:14 localhost ceph-mon[303051]: mon.np0005471150 calling monitor election Oct 5 05:53:14 localhost ceph-mon[303051]: mon.np0005471151 calling monitor election Oct 5 05:53:14 localhost ceph-mon[303051]: mon.np0005471147 calling monitor election Oct 5 05:53:14 localhost ceph-mon[303051]: mon.np0005471148 calling monitor election Oct 5 05:53:14 localhost ceph-mon[303051]: mon.np0005471148 is new leader, mons np0005471148,np0005471147,np0005471152,np0005471151,np0005471150 in quorum (ranks 0,1,2,3,4) Oct 5 05:53:14 localhost ceph-mon[303051]: overall HEALTH_OK Oct 5 05:53:14 localhost podman[305504]: Oct 5 05:53:14 localhost podman[305504]: 2025-10-05 09:53:14.237087984 +0000 UTC m=+0.067148012 container create 20246b045e5e8bcff4ef2b665160b7943f4a8ecbdf1f0513f86eb8998f33b249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poincare, RELEASE=main, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph) Oct 5 05:53:14 localhost systemd[1]: Started libpod-conmon-20246b045e5e8bcff4ef2b665160b7943f4a8ecbdf1f0513f86eb8998f33b249.scope. Oct 5 05:53:14 localhost systemd[1]: Started libcrun container. 
Oct 5 05:53:14 localhost podman[305504]: 2025-10-05 09:53:14.290984696 +0000 UTC m=+0.121044754 container init 20246b045e5e8bcff4ef2b665160b7943f4a8ecbdf1f0513f86eb8998f33b249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poincare, com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc.) 
Oct 5 05:53:14 localhost podman[305504]: 2025-10-05 09:53:14.299571229 +0000 UTC m=+0.129631267 container start 20246b045e5e8bcff4ef2b665160b7943f4a8ecbdf1f0513f86eb8998f33b249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poincare, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_BRANCH=main, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph) Oct 5 05:53:14 localhost podman[305504]: 2025-10-05 09:53:14.299757934 +0000 UTC m=+0.129817992 container attach 20246b045e5e8bcff4ef2b665160b7943f4a8ecbdf1f0513f86eb8998f33b249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poincare, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, RELEASE=main, distribution-scope=public, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph 
ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., release=553, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=) Oct 5 05:53:14 localhost tender_poincare[305519]: 167 167 Oct 5 05:53:14 localhost systemd[1]: libpod-20246b045e5e8bcff4ef2b665160b7943f4a8ecbdf1f0513f86eb8998f33b249.scope: Deactivated successfully. Oct 5 05:53:14 localhost podman[305504]: 2025-10-05 09:53:14.304950335 +0000 UTC m=+0.135010473 container died 20246b045e5e8bcff4ef2b665160b7943f4a8ecbdf1f0513f86eb8998f33b249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poincare, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.33.12, ceph=True, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
vendor=Red Hat, Inc.) Oct 5 05:53:14 localhost podman[305504]: 2025-10-05 09:53:14.211619994 +0000 UTC m=+0.041680092 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:14 localhost podman[305524]: 2025-10-05 09:53:14.403318382 +0000 UTC m=+0.093127926 container remove 20246b045e5e8bcff4ef2b665160b7943f4a8ecbdf1f0513f86eb8998f33b249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poincare, io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhceph, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55) Oct 5 05:53:14 localhost systemd[1]: libpod-conmon-20246b045e5e8bcff4ef2b665160b7943f4a8ecbdf1f0513f86eb8998f33b249.scope: Deactivated successfully. Oct 5 05:53:14 localhost systemd[1]: var-lib-containers-storage-overlay-99a82c9eb769b482c59deac103d48884d9a6ca2e03f61049c3a7843e8baae5e5-merged.mount: Deactivated successfully. 
Oct 5 05:53:14 localhost ceph-mon[303051]: mon.np0005471150@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:15 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:15 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:15 localhost ceph-mon[303051]: Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:53:15 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:15 localhost ceph-mon[303051]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:53:15 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:15 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:15 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:53:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:53:15 localhost podman[305541]: 2025-10-05 09:53:15.688775484 +0000 UTC m=+0.091926494 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:53:15 localhost podman[305541]: 2025-10-05 09:53:15.730113555 +0000 UTC m=+0.133264605 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true) Oct 5 05:53:15 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:53:16 localhost ceph-mon[303051]: Reconfiguring osd.2 (monmap changed)... 
Oct 5 05:53:16 localhost ceph-mon[303051]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:53:16 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:16 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:16 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:53:16 localhost nova_compute[297021]: 2025-10-05 09:53:16.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:17 localhost nova_compute[297021]: 2025-10-05 09:53:17.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:17 localhost ceph-mon[303051]: Reconfiguring osd.5 (monmap changed)... Oct 5 05:53:17 localhost ceph-mon[303051]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:53:17 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:17 localhost ceph-mon[303051]: Removed label mon from host np0005471146.localdomain Oct 5 05:53:17 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:17 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:17 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:53:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:53:18 localhost systemd[1]: tmp-crun.YLBydO.mount: Deactivated successfully. Oct 5 05:53:18 localhost podman[305567]: 2025-10-05 09:53:18.6834308 +0000 UTC m=+0.092604163 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Oct 5 05:53:18 localhost podman[305567]: 2025-10-05 
09:53:18.698826488 +0000 UTC m=+0.107999851 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:53:18 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:53:18 localhost ceph-mon[303051]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:53:18 localhost ceph-mon[303051]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:53:18 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:18 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:18 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:18 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:19 localhost ceph-mon[303051]: mon.np0005471150@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:19 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:53:19 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:53:19 localhost ceph-mon[303051]: Removed label mgr from host np0005471146.localdomain Oct 5 05:53:19 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:19 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:19 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:53:19 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:53:20.452 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:53:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:53:20.453 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:53:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:53:20.454 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:53:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:53:20 localhost systemd[1]: tmp-crun.YcctTj.mount: Deactivated successfully. 
Oct 5 05:53:20 localhost podman[305587]: 2025-10-05 09:53:20.680222314 +0000 UTC m=+0.090960329 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public) Oct 5 05:53:20 localhost podman[305587]: 2025-10-05 09:53:20.692191438 +0000 UTC m=+0.102929443 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter) Oct 5 05:53:20 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:53:20 localhost ceph-mon[303051]: Reconfiguring mon.np0005471151 (monmap changed)... Oct 5 05:53:20 localhost ceph-mon[303051]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:53:20 localhost ceph-mon[303051]: Removed label _admin from host np0005471146.localdomain Oct 5 05:53:20 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:20 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:20 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:21 localhost podman[248506]: time="2025-10-05T09:53:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:53:21 localhost podman[248506]: @ - - [05/Oct/2025:09:53:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:53:21 localhost podman[248506]: @ - - [05/Oct/2025:09:53:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18823 "" "Go-http-client/1.1" Oct 5 05:53:21 localhost nova_compute[297021]: 2025-10-05 09:53:21.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:21 localhost ceph-mon[303051]: Reconfiguring crash.np0005471152 (monmap changed)... 
Oct 5 05:53:21 localhost ceph-mon[303051]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:53:21 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:21 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:21 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:53:22 localhost openstack_network_exporter[250601]: ERROR 09:53:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:53:22 localhost openstack_network_exporter[250601]: ERROR 09:53:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:53:22 localhost openstack_network_exporter[250601]: ERROR 09:53:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:53:22 localhost openstack_network_exporter[250601]: ERROR 09:53:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:53:22 localhost openstack_network_exporter[250601]: Oct 5 05:53:22 localhost openstack_network_exporter[250601]: ERROR 09:53:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:53:22 localhost openstack_network_exporter[250601]: Oct 5 05:53:22 localhost nova_compute[297021]: 2025-10-05 09:53:22.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:22 localhost ceph-mon[303051]: Reconfiguring osd.0 (monmap changed)... 
Oct 5 05:53:22 localhost ceph-mon[303051]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:53:22 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:22 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:22 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:53:23 localhost ceph-mon[303051]: Reconfiguring osd.3 (monmap changed)... Oct 5 05:53:23 localhost ceph-mon[303051]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:53:23 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:23 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:23 localhost ceph-mon[303051]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:53:23 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:53:23 localhost ceph-mon[303051]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:53:23 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:23 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:23 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... 
Oct 5 05:53:23 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:23 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:53:24 localhost ceph-mon[303051]: mon.np0005471150@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:25 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:25 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:25 localhost ceph-mon[303051]: Reconfiguring mon.np0005471152 (monmap changed)... Oct 5 05:53:25 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:53:25 localhost ceph-mon[303051]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:53:26 localhost nova_compute[297021]: 2025-10-05 09:53:26.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:26 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:26 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:27 localhost nova_compute[297021]: 2025-10-05 09:53:27.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:53:27 localhost podman[305626]: 2025-10-05 09:53:27.551842076 +0000 UTC m=+0.091638256 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:53:27 localhost podman[305626]: 2025-10-05 09:53:27.564977633 +0000 UTC m=+0.104773873 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:53:27 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:53:28 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:28 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:28 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:53:28 localhost ceph-mon[303051]: Removing np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:28 localhost ceph-mon[303051]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[303051]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[303051]: Removing np0005471146.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:28 localhost ceph-mon[303051]: Removing np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:28 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:28 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: mon.np0005471150@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 
322961408 Oct 5 05:53:29 localhost ceph-mon[303051]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:29 localhost ceph-mon[303051]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:29 localhost ceph-mon[303051]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:29 localhost ceph-mon[303051]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:29 localhost ceph-mon[303051]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: from='mgr.17391 
172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[303051]: Removing daemon mgr.np0005471146.xqzesq from np0005471146.localdomain -- ports [9283, 8765] Oct 5 05:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:53:30 localhost systemd[1]: tmp-crun.ykvyn7.mount: Deactivated successfully. Oct 5 05:53:30 localhost podman[305951]: 2025-10-05 09:53:30.671581805 +0000 UTC m=+0.081458431 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:53:30 localhost podman[305951]: 
2025-10-05 09:53:30.684837404 +0000 UTC m=+0.094714010 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:53:30 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:53:31 localhost nova_compute[297021]: 2025-10-05 09:53:31.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:53:32 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:32 localhost ceph-mon[303051]: Added label _no_schedule to host np0005471146.localdomain
Oct 5 05:53:32 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:32 localhost ceph-mon[303051]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471146.localdomain
Oct 5 05:53:32 localhost ceph-mon[303051]: Removing key for mgr.np0005471146.xqzesq
Oct 5 05:53:32 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth rm", "entity": "mgr.np0005471146.xqzesq"} : dispatch
Oct 5 05:53:32 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005471146.xqzesq"}]': finished
Oct 5 05:53:32 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:32 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:32 localhost nova_compute[297021]: 2025-10-05 09:53:32.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:53:33 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:33 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:33 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:33 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:53:33 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:33 localhost ceph-mon[303051]: Removing daemon crash.np0005471146 from np0005471146.localdomain -- ports []
Oct 5 05:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:53:34 localhost podman[306009]: 2025-10-05 09:53:34.209957675 +0000 UTC m=+0.091317318 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 05:53:34 localhost podman[306009]: 2025-10-05 09:53:34.242550609 +0000 UTC m=+0.123910272 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid)
Oct 5 05:53:34 localhost podman[306010]: 2025-10-05 09:53:34.258684556 +0000 UTC m=+0.137499210 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 5 05:53:34 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:53:34 localhost podman[306010]: 2025-10-05 09:53:34.297305044 +0000 UTC m=+0.176119688 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 5 05:53:34 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 05:53:34 localhost ceph-mon[303051]: mon.np0005471150@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain"} : dispatch
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain"}]': finished
Oct 5 05:53:34 localhost ceph-mon[303051]: Removed host np0005471146.localdomain
Oct 5 05:53:34 localhost ceph-mon[303051]: Removing key for client.crash.np0005471146.localdomain
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth rm", "entity": "client.crash.np0005471146.localdomain"} : dispatch
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005471146.localdomain"}]': finished
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:34 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:53:35 localhost ceph-mon[303051]: Reconfiguring crash.np0005471147 (monmap changed)...
Oct 5 05:53:35 localhost ceph-mon[303051]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:53:35 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:35 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:35 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:53:36 localhost sshd[306065]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:53:36 localhost systemd[1]: Created slice User Slice of UID 1003.
Oct 5 05:53:36 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Oct 5 05:53:36 localhost systemd-logind[760]: New session 69 of user tripleo-admin.
Oct 5 05:53:36 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Oct 5 05:53:36 localhost systemd[1]: Starting User Manager for UID 1003...
Oct 5 05:53:36 localhost systemd[306069]: Queued start job for default target Main User Target.
Oct 5 05:53:36 localhost systemd[306069]: Created slice User Application Slice.
Oct 5 05:53:36 localhost systemd[306069]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 5 05:53:36 localhost systemd[306069]: Started Daily Cleanup of User's Temporary Directories.
Oct 5 05:53:36 localhost systemd[306069]: Reached target Paths.
Oct 5 05:53:36 localhost systemd[306069]: Reached target Timers.
Oct 5 05:53:36 localhost systemd[306069]: Starting D-Bus User Message Bus Socket...
Oct 5 05:53:36 localhost systemd[306069]: Starting Create User's Volatile Files and Directories...
Oct 5 05:53:36 localhost systemd[306069]: Listening on D-Bus User Message Bus Socket.
Oct 5 05:53:36 localhost systemd[306069]: Reached target Sockets.
Oct 5 05:53:36 localhost systemd[306069]: Finished Create User's Volatile Files and Directories.
Oct 5 05:53:36 localhost systemd[306069]: Reached target Basic System.
Oct 5 05:53:36 localhost systemd[306069]: Reached target Main User Target.
Oct 5 05:53:36 localhost systemd[306069]: Startup finished in 163ms.
Oct 5 05:53:36 localhost systemd[1]: Started User Manager for UID 1003.
Oct 5 05:53:36 localhost systemd[1]: Started Session 69 of User tripleo-admin.
Oct 5 05:53:36 localhost nova_compute[297021]: 2025-10-05 09:53:36.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:53:36 localhost ceph-mon[303051]: Reconfiguring mon.np0005471147 (monmap changed)...
Oct 5 05:53:36 localhost ceph-mon[303051]: Reconfiguring daemon mon.np0005471147 on np0005471147.localdomain
Oct 5 05:53:36 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:36 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:36 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:37 localhost python3[306211]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.103/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:53:37 localhost nova_compute[297021]: 2025-10-05 09:53:37.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:53:37 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:53:37 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:53:37 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:37 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:37 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:37 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:53:38 localhost python3[306357]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.103/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:53:38 localhost ceph-mon[303051]: Reconfiguring mon.np0005471148 (monmap changed)...
Oct 5 05:53:38 localhost ceph-mon[303051]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:53:38 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:38 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:38 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:39 localhost python3[306502]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.103 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:53:39 localhost ceph-mon[303051]: mon.np0005471150@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:53:39 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:53:39 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:53:39 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:39 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:39 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:53:40 localhost ceph-mon[303051]: Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:53:40 localhost ceph-mon[303051]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:53:40 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:40 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:40 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:53:41 localhost podman[306556]:
Oct 5 05:53:41 localhost podman[306556]: 2025-10-05 09:53:41.146629 +0000 UTC m=+0.074454951 container create 0816ef6fb5908722f1b2aa3dd97847cd9180ee413f80ab20ad26caeed24b960d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_black, RELEASE=main, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, build-date=2025-09-24T08:57:55, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_CLEAN=True, ceph=True, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph)
Oct 5 05:53:41 localhost systemd[1]: Started libpod-conmon-0816ef6fb5908722f1b2aa3dd97847cd9180ee413f80ab20ad26caeed24b960d.scope.
Oct 5 05:53:41 localhost systemd[1]: Started libcrun container.
Oct 5 05:53:41 localhost podman[306556]: 2025-10-05 09:53:41.117459588 +0000 UTC m=+0.045285569 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:53:41 localhost podman[306556]: 2025-10-05 09:53:41.223961046 +0000 UTC m=+0.151786977 container init 0816ef6fb5908722f1b2aa3dd97847cd9180ee413f80ab20ad26caeed24b960d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_black, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553)
Oct 5 05:53:41 localhost podman[306556]: 2025-10-05 09:53:41.235038527 +0000 UTC m=+0.162864468 container start 0816ef6fb5908722f1b2aa3dd97847cd9180ee413f80ab20ad26caeed24b960d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_black, ceph=True, distribution-scope=public, version=7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, release=553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 05:53:41 localhost podman[306556]: 2025-10-05 09:53:41.235592803 +0000 UTC m=+0.163418734 container attach 0816ef6fb5908722f1b2aa3dd97847cd9180ee413f80ab20ad26caeed24b960d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_black, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, vcs-type=git, version=7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.expose-services=)
Oct 5 05:53:41 localhost nervous_black[306571]: 167 167
Oct 5 05:53:41 localhost systemd[1]: libpod-0816ef6fb5908722f1b2aa3dd97847cd9180ee413f80ab20ad26caeed24b960d.scope: Deactivated successfully.
Oct 5 05:53:41 localhost podman[306556]: 2025-10-05 09:53:41.23955437 +0000 UTC m=+0.167380371 container died 0816ef6fb5908722f1b2aa3dd97847cd9180ee413f80ab20ad26caeed24b960d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_black, io.buildah.version=1.33.12, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=553, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, name=rhceph, version=7, CEPH_POINT_RELEASE=)
Oct 5 05:53:41 localhost podman[306584]: 2025-10-05 09:53:41.3258346 +0000 UTC m=+0.075132869 container remove 0816ef6fb5908722f1b2aa3dd97847cd9180ee413f80ab20ad26caeed24b960d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_black, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, distribution-scope=public, ceph=True, GIT_CLEAN=True, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, version=7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 05:53:41 localhost systemd[1]: libpod-conmon-0816ef6fb5908722f1b2aa3dd97847cd9180ee413f80ab20ad26caeed24b960d.scope: Deactivated successfully.
Oct 5 05:53:41 localhost nova_compute[297021]: 2025-10-05 09:53:41.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:53:41 localhost ceph-mon[303051]: Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:53:41 localhost ceph-mon[303051]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:53:41 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:41 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:41 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Oct 5 05:53:42 localhost podman[306660]:
Oct 5 05:53:42 localhost podman[306660]: 2025-10-05 09:53:42.051706216 +0000 UTC m=+0.079937740 container create 944ecf800d64b7c634e5d1f169b654347f7db3e97c95fec88340ab7eded72252 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_lalande, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, release=553, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=)
Oct 5 05:53:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:53:42 localhost systemd[1]: Started libpod-conmon-944ecf800d64b7c634e5d1f169b654347f7db3e97c95fec88340ab7eded72252.scope.
Oct 5 05:53:42 localhost systemd[1]: Started libcrun container.
Oct 5 05:53:42 localhost podman[306660]: 2025-10-05 09:53:42.020390576 +0000 UTC m=+0.048650481 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:53:42 localhost podman[306660]: 2025-10-05 09:53:42.125717292 +0000 UTC m=+0.153948806 container init 944ecf800d64b7c634e5d1f169b654347f7db3e97c95fec88340ab7eded72252 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_lalande, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, version=7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 5 05:53:42 localhost podman[306660]: 2025-10-05 09:53:42.138544351 +0000 UTC m=+0.166775855 container start 944ecf800d64b7c634e5d1f169b654347f7db3e97c95fec88340ab7eded72252 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_lalande, version=7, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 05:53:42 localhost podman[306660]: 2025-10-05 09:53:42.138812298 +0000 UTC m=+0.167043812 container attach 944ecf800d64b7c634e5d1f169b654347f7db3e97c95fec88340ab7eded72252 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_lalande, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, ceph=True, version=7, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, name=rhceph)
Oct 5 05:53:42 localhost modest_lalande[306677]: 167 167
Oct 5 05:53:42 localhost systemd[1]: libpod-944ecf800d64b7c634e5d1f169b654347f7db3e97c95fec88340ab7eded72252.scope: Deactivated successfully.
Oct 5 05:53:42 localhost podman[306660]: 2025-10-05 09:53:42.141384578 +0000 UTC m=+0.169616122 container died 944ecf800d64b7c634e5d1f169b654347f7db3e97c95fec88340ab7eded72252 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_lalande, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, ceph=True, release=553, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7)
Oct 5 05:53:42 localhost systemd[1]: var-lib-containers-storage-overlay-206c1354063e2653c2d92c4a53ff17092ca5d6f95724e3b7bbab5da70a90cc1e-merged.mount: Deactivated successfully.
Oct 5 05:53:42 localhost systemd[1]: tmp-crun.V1ZVuY.mount: Deactivated successfully.
Oct 5 05:53:42 localhost podman[306694]: 2025-10-05 09:53:42.252049768 +0000 UTC m=+0.096736624 container remove 944ecf800d64b7c634e5d1f169b654347f7db3e97c95fec88340ab7eded72252 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_lalande, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, ceph=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, architecture=x86_64) Oct 5 05:53:42 localhost systemd[1]: libpod-conmon-944ecf800d64b7c634e5d1f169b654347f7db3e97c95fec88340ab7eded72252.scope: Deactivated successfully. 
Oct 5 05:53:42 localhost podman[306676]: 2025-10-05 09:53:42.21227876 +0000 UTC m=+0.109407438 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:53:42 localhost podman[306676]: 2025-10-05 09:53:42.297879012 +0000 UTC 
m=+0.195007650 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 05:53:42 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:53:42 localhost nova_compute[297021]: 2025-10-05 09:53:42.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:42 localhost ceph-mon[303051]: Reconfiguring osd.1 (monmap changed)... Oct 5 05:53:42 localhost ceph-mon[303051]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:53:42 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:42 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:42 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:53:42 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:43 localhost podman[306776]: Oct 5 05:53:43 localhost podman[306776]: 2025-10-05 09:53:43.086729875 +0000 UTC m=+0.066632147 container create 245a54272e44a03fdc263e8b7dee782a83dde9ed38d1b59e9cb33c38d66764dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_payne, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, version=7, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:53:43 localhost systemd[1]: Started libpod-conmon-245a54272e44a03fdc263e8b7dee782a83dde9ed38d1b59e9cb33c38d66764dd.scope. Oct 5 05:53:43 localhost systemd[1]: Started libcrun container. Oct 5 05:53:43 localhost systemd[1]: var-lib-containers-storage-overlay-fc20d22804d335ba0889aa346f3d7e220d9e9d8a878188485ce8c9d29cfd206a-merged.mount: Deactivated successfully. Oct 5 05:53:43 localhost podman[306776]: 2025-10-05 09:53:43.15697085 +0000 UTC m=+0.136873092 container init 245a54272e44a03fdc263e8b7dee782a83dde9ed38d1b59e9cb33c38d66764dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_payne, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, architecture=x86_64, release=553, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vendor=Red Hat, Inc.) 
Oct 5 05:53:43 localhost podman[306776]: 2025-10-05 09:53:43.061554532 +0000 UTC m=+0.041456754 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:43 localhost systemd[1]: tmp-crun.dnxry4.mount: Deactivated successfully. Oct 5 05:53:43 localhost podman[306776]: 2025-10-05 09:53:43.169708706 +0000 UTC m=+0.149610938 container start 245a54272e44a03fdc263e8b7dee782a83dde9ed38d1b59e9cb33c38d66764dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_payne, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux ) Oct 5 05:53:43 localhost podman[306776]: 2025-10-05 09:53:43.170007064 +0000 UTC m=+0.149909326 container attach 245a54272e44a03fdc263e8b7dee782a83dde9ed38d1b59e9cb33c38d66764dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_payne, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , release=553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main) Oct 5 05:53:43 localhost stoic_payne[306791]: 167 167 Oct 5 05:53:43 localhost systemd[1]: libpod-245a54272e44a03fdc263e8b7dee782a83dde9ed38d1b59e9cb33c38d66764dd.scope: Deactivated successfully. Oct 5 05:53:43 localhost podman[306776]: 2025-10-05 09:53:43.172789369 +0000 UTC m=+0.152691601 container died 245a54272e44a03fdc263e8b7dee782a83dde9ed38d1b59e9cb33c38d66764dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_payne, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, version=7, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12) Oct 5 05:53:43 localhost podman[306796]: 2025-10-05 09:53:43.264764104 +0000 UTC m=+0.082490219 container remove 245a54272e44a03fdc263e8b7dee782a83dde9ed38d1b59e9cb33c38d66764dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_payne, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., version=7, io.buildah.version=1.33.12) Oct 5 05:53:43 localhost systemd[1]: libpod-conmon-245a54272e44a03fdc263e8b7dee782a83dde9ed38d1b59e9cb33c38d66764dd.scope: Deactivated successfully. Oct 5 05:53:43 localhost ceph-mon[303051]: Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:53:43 localhost ceph-mon[303051]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:53:43 localhost ceph-mon[303051]: Saving service mon spec with placement label:mon Oct 5 05:53:43 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:43 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:43 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:53:44 localhost podman[306874]: Oct 5 05:53:44 localhost podman[306874]: 2025-10-05 09:53:44.130932985 +0000 UTC m=+0.084830322 container create c6753855afb64b99661fc60f2b219728732bdeacb45925a686cb802becaca5f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_newton, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, 
architecture=x86_64, ceph=True) Oct 5 05:53:44 localhost systemd[1]: tmp-crun.mbn5pc.mount: Deactivated successfully. Oct 5 05:53:44 localhost systemd[1]: var-lib-containers-storage-overlay-8cf0b793e0fed347f12bfcc2eb19661c209b6c4926faeb307cd1fc0cbb0e8cf6-merged.mount: Deactivated successfully. Oct 5 05:53:44 localhost systemd[1]: Started libpod-conmon-c6753855afb64b99661fc60f2b219728732bdeacb45925a686cb802becaca5f6.scope. Oct 5 05:53:44 localhost systemd[1]: Started libcrun container. Oct 5 05:53:44 localhost podman[306874]: 2025-10-05 09:53:44.095970277 +0000 UTC m=+0.049867644 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:44 localhost podman[306874]: 2025-10-05 09:53:44.205340563 +0000 UTC m=+0.159237900 container init c6753855afb64b99661fc60f2b219728732bdeacb45925a686cb802becaca5f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_newton, version=7, ceph=True, GIT_CLEAN=True, release=553, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-type=git) Oct 5 05:53:44 localhost podman[306874]: 2025-10-05 09:53:44.224009199 +0000 UTC 
m=+0.177906546 container start c6753855afb64b99661fc60f2b219728732bdeacb45925a686cb802becaca5f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_newton, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=553, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:53:44 localhost podman[306874]: 2025-10-05 09:53:44.224317787 +0000 UTC m=+0.178215134 container attach c6753855afb64b99661fc60f2b219728732bdeacb45925a686cb802becaca5f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_newton, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=553, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , name=rhceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:53:44 localhost angry_newton[306889]: 167 167 Oct 5 05:53:44 localhost systemd[1]: libpod-c6753855afb64b99661fc60f2b219728732bdeacb45925a686cb802becaca5f6.scope: Deactivated successfully. Oct 5 05:53:44 localhost podman[306874]: 2025-10-05 09:53:44.228599363 +0000 UTC m=+0.182496730 container died c6753855afb64b99661fc60f2b219728732bdeacb45925a686cb802becaca5f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_newton, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.33.12, release=553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Oct 5 05:53:44 localhost podman[306895]: 2025-10-05 
09:53:44.333781895 +0000 UTC m=+0.090608638 container remove c6753855afb64b99661fc60f2b219728732bdeacb45925a686cb802becaca5f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_newton, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=) Oct 5 05:53:44 localhost systemd[1]: libpod-conmon-c6753855afb64b99661fc60f2b219728732bdeacb45925a686cb802becaca5f6.scope: Deactivated successfully. Oct 5 05:53:44 localhost ceph-mon[303051]: mon.np0005471150@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:44 localhost ceph-mon[303051]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... 
Oct 5 05:53:44 localhost ceph-mon[303051]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:53:44 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:44 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:44 localhost ceph-mon[303051]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:53:44 localhost ceph-mon[303051]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:44 localhost ceph-mon[303051]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:53:45 localhost podman[306965]: Oct 5 05:53:45 localhost podman[306965]: 2025-10-05 09:53:45.114314853 +0000 UTC m=+0.081795888 container create d6e5238bedc07b75a3caf8941979028139eb66f777336d854a8d6393b898ea1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_ardinghelli, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-type=git, GIT_BRANCH=main, version=7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=) Oct 5 05:53:45 localhost systemd[1]: Started libpod-conmon-d6e5238bedc07b75a3caf8941979028139eb66f777336d854a8d6393b898ea1a.scope. Oct 5 05:53:45 localhost systemd[1]: tmp-crun.syue8o.mount: Deactivated successfully. Oct 5 05:53:45 localhost systemd[1]: var-lib-containers-storage-overlay-ef27064d6c3edd03355e3e04d2d4fb9b9c796666b171bad8269d733d72a7994e-merged.mount: Deactivated successfully. Oct 5 05:53:45 localhost podman[306965]: 2025-10-05 09:53:45.081820032 +0000 UTC m=+0.049301057 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:45 localhost systemd[1]: Started libcrun container. Oct 5 05:53:45 localhost podman[306965]: 2025-10-05 09:53:45.22111053 +0000 UTC m=+0.188591555 container init d6e5238bedc07b75a3caf8941979028139eb66f777336d854a8d6393b898ea1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_ardinghelli, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , ceph=True) Oct 5 05:53:45 localhost systemd[1]: tmp-crun.EOlCRe.mount: Deactivated successfully. Oct 5 05:53:45 localhost podman[306965]: 2025-10-05 09:53:45.23916007 +0000 UTC m=+0.206641095 container start d6e5238bedc07b75a3caf8941979028139eb66f777336d854a8d6393b898ea1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_ardinghelli, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , ceph=True, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:53:45 localhost podman[306965]: 2025-10-05 09:53:45.239713805 +0000 UTC m=+0.207194830 container attach d6e5238bedc07b75a3caf8941979028139eb66f777336d854a8d6393b898ea1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_ardinghelli, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume 
Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, version=7, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, release=553) Oct 5 05:53:45 localhost zealous_ardinghelli[306980]: 167 167 Oct 5 05:53:45 localhost systemd[1]: libpod-d6e5238bedc07b75a3caf8941979028139eb66f777336d854a8d6393b898ea1a.scope: Deactivated successfully. Oct 5 05:53:45 localhost podman[306965]: 2025-10-05 09:53:45.242886651 +0000 UTC m=+0.210367686 container died d6e5238bedc07b75a3caf8941979028139eb66f777336d854a8d6393b898ea1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_ardinghelli, RELEASE=main, vendor=Red Hat, Inc., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-type=git, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, 
version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 05:53:45 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x563234bb0000 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Oct 5 05:53:45 localhost ceph-mon[303051]: mon.np0005471150@4(peon) e8 removed from monmap, suicide. Oct 5 05:53:45 localhost podman[306985]: 2025-10-05 09:53:45.349469311 +0000 UTC m=+0.098800050 container remove d6e5238bedc07b75a3caf8941979028139eb66f777336d854a8d6393b898ea1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_ardinghelli, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 05:53:45 localhost systemd[1]: libpod-conmon-d6e5238bedc07b75a3caf8941979028139eb66f777336d854a8d6393b898ea1a.scope: Deactivated successfully. 
Oct 5 05:53:45 localhost podman[307010]: 2025-10-05 09:53:45.407436014 +0000 UTC m=+0.072203910 container died e61f9f47ddc8ce6afb93939e90f8e168344b6d36c18d946619f6270f250fee30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471150, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_BRANCH=main, ceph=True, release=553, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph) Oct 5 05:53:45 localhost podman[307010]: 2025-10-05 09:53:45.444681574 +0000 UTC m=+0.109449450 container remove e61f9f47ddc8ce6afb93939e90f8e168344b6d36c18d946619f6270f250fee30 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471150, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., io.openshift.tags=rhceph ceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container) Oct 5 05:53:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:53:46 localhost podman[307115]: 2025-10-05 09:53:46.055602063 +0000 UTC m=+0.127959212 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:53:46 localhost podman[307115]: 2025-10-05 09:53:46.095104084 +0000 UTC m=+0.167461273 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller) Oct 5 05:53:46 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:53:46 localhost systemd[1]: var-lib-containers-storage-overlay-3e8b5cbf01ab94243f05d55305459b51788c8d8b1a8510ba23667fe66c7f022f-merged.mount: Deactivated successfully. Oct 5 05:53:46 localhost systemd[1]: var-lib-containers-storage-overlay-2a78ab956b03a2182105dc1635b23e472806302a7dfce9362c3d857d43d91f75-merged.mount: Deactivated successfully. Oct 5 05:53:46 localhost systemd[1]: ceph-659062ac-50b4-5607-b699-3105da7f55ee@mon.np0005471150.service: Deactivated successfully. Oct 5 05:53:46 localhost systemd[1]: Stopped Ceph mon.np0005471150 for 659062ac-50b4-5607-b699-3105da7f55ee. Oct 5 05:53:46 localhost systemd[1]: ceph-659062ac-50b4-5607-b699-3105da7f55ee@mon.np0005471150.service: Consumed 4.273s CPU time. Oct 5 05:53:46 localhost systemd[1]: Reloading. Oct 5 05:53:46 localhost systemd-rc-local-generator[307200]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:53:46 localhost systemd-sysv-generator[307203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:53:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:53:46 localhost nova_compute[297021]: 2025-10-05 09:53:46.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:47 localhost nova_compute[297021]: 2025-10-05 09:53:47.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:53:49 localhost podman[307209]: 2025-10-05 09:53:49.693713709 +0000 UTC m=+0.098409220 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 05:53:49 localhost podman[307209]: 2025-10-05 09:53:49.708554491 +0000 UTC m=+0.113249982 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:53:49 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:53:51 localhost podman[248506]: time="2025-10-05T09:53:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:53:51 localhost podman[248506]: @ - - [05/Oct/2025:09:53:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 141657 "" "Go-http-client/1.1" Oct 5 05:53:51 localhost podman[248506]: @ - - [05/Oct/2025:09:53:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18349 "" "Go-http-client/1.1" Oct 5 05:53:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:53:51 localhost podman[307228]: 2025-10-05 09:53:51.673069158 +0000 UTC m=+0.084089401 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 05:53:51 localhost podman[307228]: 2025-10-05 09:53:51.711512571 +0000 UTC m=+0.122532804 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 05:53:51 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:53:51 localhost nova_compute[297021]: 2025-10-05 09:53:51.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:52 localhost openstack_network_exporter[250601]: ERROR 09:53:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:53:52 localhost openstack_network_exporter[250601]: ERROR 09:53:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:53:52 localhost openstack_network_exporter[250601]: ERROR 09:53:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:53:52 localhost openstack_network_exporter[250601]: ERROR 09:53:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:53:52 localhost openstack_network_exporter[250601]: Oct 5 05:53:52 localhost openstack_network_exporter[250601]: ERROR 09:53:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:53:52 localhost openstack_network_exporter[250601]: Oct 5 05:53:52 localhost nova_compute[297021]: 2025-10-05 09:53:52.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:55 localhost sshd[307351]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:53:55 localhost podman[307358]: 2025-10-05 09:53:55.748471137 +0000 UTC m=+0.090230488 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.33.12, architecture=x86_64, version=7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, release=553, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:53:55 localhost podman[307358]: 2025-10-05 09:53:55.851805849 +0000 UTC m=+0.193565190 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, version=7, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:53:56 localhost nova_compute[297021]: 2025-10-05 09:53:56.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:57 localhost nova_compute[297021]: 2025-10-05 09:53:57.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:53:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:53:57 localhost podman[307747]: 2025-10-05 09:53:57.80717852 +0000 UTC m=+0.091438662 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:53:57 localhost podman[307747]: 2025-10-05 09:53:57.820766048 +0000 UTC m=+0.105026230 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:53:57 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 05:53:59 localhost podman[307903]: Oct 5 05:53:59 localhost podman[307903]: 2025-10-05 09:53:59.185590643 +0000 UTC m=+0.076574187 container create 9d836b3a57bae8d307bccd2a65e9f2c4f4a1321c8db22bd8fa1dd0d909a1e2d5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_spence, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, maintainer=Guillaume Abrioux , release=553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7) Oct 5 05:53:59 localhost systemd[1]: Started libpod-conmon-9d836b3a57bae8d307bccd2a65e9f2c4f4a1321c8db22bd8fa1dd0d909a1e2d5.scope. Oct 5 05:53:59 localhost systemd[1]: Started libcrun container. 
Oct 5 05:53:59 localhost podman[307903]: 2025-10-05 09:53:59.154303325 +0000 UTC m=+0.045286899 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:59 localhost podman[307903]: 2025-10-05 09:53:59.256219878 +0000 UTC m=+0.147203422 container init 9d836b3a57bae8d307bccd2a65e9f2c4f4a1321c8db22bd8fa1dd0d909a1e2d5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_spence, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, build-date=2025-09-24T08:57:55, name=rhceph, distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:53:59 localhost podman[307903]: 2025-10-05 09:53:59.266166698 +0000 UTC m=+0.157150242 container start 9d836b3a57bae8d307bccd2a65e9f2c4f4a1321c8db22bd8fa1dd0d909a1e2d5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_spence, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, release=553, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=) Oct 5 05:53:59 localhost podman[307903]: 2025-10-05 09:53:59.266449036 +0000 UTC m=+0.157432620 container attach 9d836b3a57bae8d307bccd2a65e9f2c4f4a1321c8db22bd8fa1dd0d909a1e2d5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_spence, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_BRANCH=main, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, 
io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:53:59 localhost inspiring_spence[307918]: 167 167 Oct 5 05:53:59 localhost systemd[1]: libpod-9d836b3a57bae8d307bccd2a65e9f2c4f4a1321c8db22bd8fa1dd0d909a1e2d5.scope: Deactivated successfully. Oct 5 05:53:59 localhost podman[307903]: 2025-10-05 09:53:59.26991431 +0000 UTC m=+0.160897844 container died 9d836b3a57bae8d307bccd2a65e9f2c4f4a1321c8db22bd8fa1dd0d909a1e2d5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_spence, name=rhceph, build-date=2025-09-24T08:57:55, version=7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, ceph=True, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=) Oct 5 05:53:59 localhost podman[307923]: 2025-10-05 09:53:59.361335199 +0000 UTC m=+0.083227208 container remove 9d836b3a57bae8d307bccd2a65e9f2c4f4a1321c8db22bd8fa1dd0d909a1e2d5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_spence, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., release=553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main) Oct 5 05:53:59 localhost systemd[1]: libpod-conmon-9d836b3a57bae8d307bccd2a65e9f2c4f4a1321c8db22bd8fa1dd0d909a1e2d5.scope: Deactivated successfully. Oct 5 05:53:59 localhost podman[307939]: Oct 5 05:53:59 localhost podman[307939]: 2025-10-05 09:53:59.479096683 +0000 UTC m=+0.080987417 container create 4f2febe58905dcfcb9da62a471fe79d37d7d724af7e8035f2f40e167592bed77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hopper, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.tags=rhceph ceph, release=553, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.buildah.version=1.33.12, 
architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, ceph=True, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:53:59 localhost systemd[1]: Started libpod-conmon-4f2febe58905dcfcb9da62a471fe79d37d7d724af7e8035f2f40e167592bed77.scope. Oct 5 05:53:59 localhost systemd[1]: Started libcrun container. Oct 5 05:53:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8854bb511e991c6c9000fafa2b443e7a900a3e8187195df2bfd8ebf43296c4c7/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Oct 5 05:53:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8854bb511e991c6c9000fafa2b443e7a900a3e8187195df2bfd8ebf43296c4c7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Oct 5 05:53:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8854bb511e991c6c9000fafa2b443e7a900a3e8187195df2bfd8ebf43296c4c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:53:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8854bb511e991c6c9000fafa2b443e7a900a3e8187195df2bfd8ebf43296c4c7/merged/var/lib/ceph/mon/ceph-np0005471150 supports timestamps until 2038 (0x7fffffff) Oct 5 05:53:59 localhost podman[307939]: 2025-10-05 09:53:59.542500112 +0000 UTC m=+0.144390846 container init 4f2febe58905dcfcb9da62a471fe79d37d7d724af7e8035f2f40e167592bed77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hopper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph 
ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=553, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7) Oct 5 05:53:59 localhost podman[307939]: 2025-10-05 09:53:59.446279473 +0000 UTC m=+0.048170217 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:59 localhost podman[307939]: 2025-10-05 09:53:59.551416675 +0000 UTC m=+0.153307419 container start 4f2febe58905dcfcb9da62a471fe79d37d7d724af7e8035f2f40e167592bed77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hopper, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, name=rhceph, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, vendor=Red Hat, Inc., 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, version=7) Oct 5 05:53:59 localhost podman[307939]: 2025-10-05 09:53:59.551638471 +0000 UTC m=+0.153529235 container attach 4f2febe58905dcfcb9da62a471fe79d37d7d724af7e8035f2f40e167592bed77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hopper, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:53:59 localhost systemd[1]: libpod-4f2febe58905dcfcb9da62a471fe79d37d7d724af7e8035f2f40e167592bed77.scope: Deactivated successfully. 
Oct 5 05:53:59 localhost podman[307939]: 2025-10-05 09:53:59.648531708 +0000 UTC m=+0.250422442 container died 4f2febe58905dcfcb9da62a471fe79d37d7d724af7e8035f2f40e167592bed77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hopper, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, name=rhceph, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True) Oct 5 05:53:59 localhost podman[307981]: 2025-10-05 09:53:59.74079741 +0000 UTC m=+0.084188184 container remove 4f2febe58905dcfcb9da62a471fe79d37d7d724af7e8035f2f40e167592bed77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hopper, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, release=553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, 
com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2025-09-24T08:57:55) Oct 5 05:53:59 localhost systemd[1]: libpod-conmon-4f2febe58905dcfcb9da62a471fe79d37d7d724af7e8035f2f40e167592bed77.scope: Deactivated successfully. Oct 5 05:53:59 localhost systemd[1]: Reloading. Oct 5 05:53:59 localhost systemd-rc-local-generator[308017]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:53:59 localhost systemd-sysv-generator[308023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:54:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:54:00 localhost systemd[1]: var-lib-containers-storage-overlay-080a84bebcf3e353d20150225bccd5e68393719ef6ac1eceb5b18d649e9ab1f8-merged.mount: Deactivated successfully. Oct 5 05:54:00 localhost systemd[1]: Reloading. Oct 5 05:54:00 localhost systemd-sysv-generator[308066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:54:00 localhost systemd-rc-local-generator[308062]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:54:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:54:00 localhost systemd[1]: Starting Ceph mon.np0005471150 for 659062ac-50b4-5607-b699-3105da7f55ee... Oct 5 05:54:00 localhost podman[308124]: Oct 5 05:54:00 localhost podman[308124]: 2025-10-05 09:54:00.921530663 +0000 UTC m=+0.079418436 container create bec3c0675bd567a8306fd74801d38a255b68dc775f31aa8fff165c107ad78fb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471150, distribution-scope=public, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, version=7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:54:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:54:00 localhost systemd[1]: tmp-crun.EpAU4o.mount: Deactivated successfully. 
Oct 5 05:54:00 localhost podman[308124]: 2025-10-05 09:54:00.889089762 +0000 UTC m=+0.046977585 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d8d0291cee6a31611b6f7cd1b7350ecc8f8bc89cf5b000929d64fb6fd69da9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:54:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d8d0291cee6a31611b6f7cd1b7350ecc8f8bc89cf5b000929d64fb6fd69da9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 05:54:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d8d0291cee6a31611b6f7cd1b7350ecc8f8bc89cf5b000929d64fb6fd69da9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:54:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d8d0291cee6a31611b6f7cd1b7350ecc8f8bc89cf5b000929d64fb6fd69da9/merged/var/lib/ceph/mon/ceph-np0005471150 supports timestamps until 2038 (0x7fffffff) Oct 5 05:54:01 localhost podman[308124]: 2025-10-05 09:54:00.999945639 +0000 UTC m=+0.157833422 container init bec3c0675bd567a8306fd74801d38a255b68dc775f31aa8fff165c107ad78fb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471150, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, RELEASE=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:54:01 localhost podman[308124]: 2025-10-05 09:54:01.009518728 +0000 UTC m=+0.167406501 container start bec3c0675bd567a8306fd74801d38a255b68dc775f31aa8fff165c107ad78fb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471150, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vendor=Red Hat, Inc., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, io.buildah.version=1.33.12, GIT_BRANCH=main, name=rhceph, vcs-type=git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7) Oct 5 05:54:01 localhost bash[308124]: bec3c0675bd567a8306fd74801d38a255b68dc775f31aa8fff165c107ad78fb6 Oct 5 05:54:01 localhost systemd[1]: Started Ceph mon.np0005471150 for 
659062ac-50b4-5607-b699-3105da7f55ee. Oct 5 05:54:01 localhost ceph-mon[308154]: set uid:gid to 167:167 (ceph:ceph) Oct 5 05:54:01 localhost ceph-mon[308154]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Oct 5 05:54:01 localhost ceph-mon[308154]: pidfile_write: ignore empty --pid-file Oct 5 05:54:01 localhost ceph-mon[308154]: load: jerasure load: lrc Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: RocksDB version: 7.9.2 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Git sha 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: DB SUMMARY Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: DB Session ID: J2NOOSTRKLEUC7SFP9C2 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: CURRENT file: CURRENT Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: IDENTITY file: IDENTITY Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005471150/store.db dir, Total Num: 0, files: Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005471150/store.db: 000004.log size: 886 ; Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.error_if_exists: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.create_if_missing: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.paranoid_checks: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.env: 0x55e07528f9e0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.fs: 
PosixFileSystem Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.info_log: 0x55e0777f6d20 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_file_opening_threads: 16 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.statistics: (nil) Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.use_fsync: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_log_file_size: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.log_file_time_to_roll: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.keep_log_file_num: 1000 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.recycle_log_file_num: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.allow_fallocate: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.allow_mmap_reads: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.allow_mmap_writes: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.use_direct_reads: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.create_missing_column_families: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.db_log_dir: Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.wal_dir: Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.table_cache_numshardbits: 6 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: 
rocksdb: Options.advise_random_on_open: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.db_write_buffer_size: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.write_buffer_manager: 0x55e077807540 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.random_access_max_buffer_size: 1048576 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.use_adaptive_mutex: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.rate_limiter: (nil) Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.wal_recovery_mode: 2 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.enable_thread_tracking: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.enable_pipelined_write: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.unordered_write: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.row_cache: None Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.wal_filter: None Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.allow_ingest_behind: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.two_write_queues: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.manual_wal_flush: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.wal_compression: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: 
rocksdb: Options.atomic_flush: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.persist_stats_to_disk: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.log_readahead_size: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.file_checksum_gen_factory: Unknown Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.best_efforts_recovery: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.allow_data_in_errors: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.db_host_id: __hostname__ Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.enforce_single_del_contracts: true Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_background_jobs: 2 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_background_compactions: -1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_subcompactions: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.delayed_write_rate : 16777216 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_total_wal_size: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.stats_dump_period_sec: 600 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.stats_persist_period_sec: 600 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: 
Options.stats_history_buffer_size: 1048576 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_open_files: -1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bytes_per_sync: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.strict_bytes_per_sync: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_readahead_size: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_background_flushes: -1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Compression algorithms supported: Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: #011kZSTD supported: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: #011kXpressCompression supported: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: #011kBZip2Compression supported: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: #011kLZ4Compression supported: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: #011kZlibCompression supported: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: #011kSnappyCompression supported: 1 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: DMutex implementation: pthread_mutex_t Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005471150/store.db/MANIFEST-000005 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.merge_operator: Oct 5 05:54:01 
localhost ceph-mon[308154]: rocksdb: Options.compaction_filter: None Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_filter_factory: None Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.sst_partitioner_factory: None Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e0777f6980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e0777f3350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.write_buffer_size: 33554432 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_write_buffer_number: 2 Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression: NoCompression Oct 5 05:54:01 
localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.num_levels: 7
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.min_write_buffer_number_to_merge: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression_opts.level: 32767
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression_opts.enabled: false
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.level0_file_num_compaction_trigger: 4
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bytes_for_level_base: 268435456
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.arena_block_size: 1048576
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.table_properties_collectors:
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.inplace_update_support: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.bloom_locality: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.max_successive_merges: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.force_consistency_checks: 1
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.ttl: 2592000
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.enable_blob_files: false
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.min_blob_size: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.blob_file_size: 268435456
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005471150/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658041054027, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658041056535, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658041056646, "job": 1, "event": "recovery_finished"}
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e07781ae00
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: DB pointer 0x55e077910000
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150 does not exist in monmap, will attempt to join an existing cluster
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 5 05:54:01 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e0777f3350#2 capacity: 512.00 MB usage: 1.30 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,1.08 KB,0.000205636%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 5 05:54:01 localhost ceph-mon[308154]: using public_addr v2:172.18.0.103:0/0 -> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
Oct 5 05:54:01 localhost ceph-mon[308154]: starting mon.np0005471150 rank -1 at public addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] at bind addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005471150 fsid 659062ac-50b4-5607-b699-3105da7f55ee
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(???) e0 preinit fsid 659062ac-50b4-5607-b699-3105da7f55ee
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(synchronizing) e8 sync_obtain_latest_monmap
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(synchronizing) e8 sync_obtain_latest_monmap obtained monmap e8
Oct 5 05:54:01 localhost podman[308137]: 2025-10-05 09:54:01.096340813 +0000 UTC m=+0.139784242 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 5 05:54:01 localhost podman[308137]: 2025-10-05 09:54:01.104278738 +0000 UTC m=+0.147722137 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:54:01 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(synchronizing).mds e16 new map
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-05T08:04:17.819317+0000#012modified#0112025-10-05T09:51:24.604984+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01180#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26863}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26863 members: 26863#012[mds.mds.np0005471152.pozuqw{0:26863} state up:active seq 14 addr [v2:172.18.0.108:6808/114949388,v1:172.18.0.108:6809/114949388] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005471151.uyxcpj{-1:17211} state up:standby seq 1 addr [v2:172.18.0.107:6808/3905827397,v1:172.18.0.107:6809/3905827397] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005471150.bsiqok{-1:17217} state up:standby seq 1 addr [v2:172.18.0.106:6808/1854153836,v1:172.18.0.106:6809/1854153836] compat {c=[1],r=[1],i=[17ff]}]
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(synchronizing).osd e82 crush map has features 3314933000854323200, adjusting msgr requires
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(synchronizing).osd e82 crush map has features 432629239337189376, adjusting msgr requires
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(synchronizing).osd e82 crush map has features 432629239337189376, adjusting msgr requires
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(synchronizing).osd e82 crush map has features 432629239337189376, adjusting msgr requires
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring crash.np0005471152 (monmap changed)...
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring osd.0 (monmap changed)...
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring daemon osd.0 on np0005471152.localdomain
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring osd.3 (monmap changed)...
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring daemon osd.3 on np0005471152.localdomain
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)...
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)...
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring crash.np0005471147 (monmap changed)...
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:54:01 localhost ceph-mon[308154]: Deploying daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:54:01 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:01 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(synchronizing).paxosservice(auth 1..37) refresh upgraded, format 0 -> 3
Oct 5 05:54:01 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x563234abc000 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(probing) e8 handle_auth_request failed to assign global_id
Oct 5 05:54:01 localhost ceph-mon[308154]: mon.np0005471150@-1(probing) e8 handle_auth_request failed to assign global_id
Oct 5 05:54:01 localhost nova_compute[297021]: 2025-10-05 09:54:01.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:54:02 localhost ceph-mon[308154]: mon.np0005471150@-1(probing) e8 handle_auth_request failed to assign global_id
Oct 5 05:54:02 localhost ceph-mon[308154]: mon.np0005471150@-1(probing) e8 handle_auth_request failed to assign global_id
Oct 5 05:54:02 localhost nova_compute[297021]: 2025-10-05 09:54:02.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:54:02 localhost ceph-mon[308154]: mon.np0005471150@-1(probing) e8 handle_auth_request failed to assign global_id
Oct 5 05:54:02 localhost ceph-mon[308154]: mon.np0005471150@-1(probing) e8 handle_auth_request failed to assign global_id
Oct 5 05:54:03 localhost ceph-mon[308154]: mon.np0005471150@-1(probing) e8 handle_auth_request failed to assign global_id
Oct 5 05:54:03 localhost ceph-mon[308154]: mon.np0005471150@-1(probing) e9 my rank is now 4 (was -1)
Oct 5 05:54:03 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election
Oct 5 05:54:03 localhost ceph-mon[308154]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1
Oct 5 05:54:03 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:03 localhost nova_compute[297021]: 2025-10-05 09:54:03.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:54:03 localhost nova_compute[297021]: 2025-10-05 09:54:03.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:54:03 localhost nova_compute[297021]: 2025-10-05 09:54:03.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:54:03 localhost nova_compute[297021]: 2025-10-05 09:54:03.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 5 05:54:03 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id
Oct 5 05:54:04 localhost nova_compute[297021]: 2025-10-05 09:54:04.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:54:04 localhost nova_compute[297021]: 2025-10-05 09:54:04.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:54:04 localhost nova_compute[297021]: 2025-10-05 09:54:04.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:54:04 localhost nova_compute[297021]: 2025-10-05 09:54:04.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:54:04 localhost nova_compute[297021]: 2025-10-05 09:54:04.453 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:54:04 localhost nova_compute[297021]: 2025-10-05 09:54:04.453 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:54:04 localhost nova_compute[297021]: 2025-10-05 09:54:04.453 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:54:04 localhost nova_compute[297021]: 2025-10-05 09:54:04.454 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:54:04 localhost nova_compute[297021]: 2025-10-05 09:54:04.454 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:54:04 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id
Oct 5 05:54:04 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id
Oct 5 05:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:54:04 localhost systemd[1]: tmp-crun.u7aA7N.mount: Deactivated successfully.
Oct 5 05:54:04 localhost podman[308215]: 2025-10-05 09:54:04.67733105 +0000 UTC m=+0.080235467 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 05:54:04 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id Oct 5 05:54:04 localhost 
podman[308215]: 2025-10-05 09:54:04.716839752 +0000 UTC m=+0.119744119 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 5 05:54:04 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:54:04 localhost podman[308214]: 2025-10-05 09:54:04.76399019 +0000 UTC m=+0.165709365 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid) Oct 5 05:54:04 localhost podman[308214]: 2025-10-05 09:54:04.773055076 +0000 UTC m=+0.174774251 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:54:04 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id Oct 5 05:54:04 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:54:05 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id Oct 5 05:54:05 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id Oct 5 05:54:05 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id Oct 5 05:54:05 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id Oct 5 05:54:05 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id Oct 5 05:54:05 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 handle_auth_request failed to assign global_id Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471150@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:54:06 
localhost ceph-mon[308154]: mgrc update_daemon_metadata mon.np0005471150 metadata {addrs=[v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005471150.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005471150.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Oct 5 05:54:06 localhost ceph-mon[308154]: Reconfiguring crash.np0005471148 (monmap changed)... Oct 5 05:54:06 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471151 calling monitor election Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471148 calling monitor election Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471147 calling monitor election Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471150 calling monitor election Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471148 is new leader, mons np0005471148,np0005471152,np0005471151 in quorum (ranks 0,2,3) Oct 5 05:54:06 localhost ceph-mon[308154]: overall HEALTH_OK Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471148 calling monitor election Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471148 is new leader, mons np0005471148,np0005471147,np0005471152,np0005471151,np0005471150 in quorum (ranks 0,1,2,3,4) Oct 5 05:54:06 localhost ceph-mon[308154]: overall 
HEALTH_OK Oct 5 05:54:06 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:06 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:06 localhost nova_compute[297021]: 2025-10-05 09:54:06.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:06 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 handle_auth_request failed to assign global_id Oct 5 05:54:07 localhost podman[308305]: Oct 5 05:54:07 localhost podman[308305]: 2025-10-05 09:54:07.144541152 +0000 UTC m=+0.087082013 container create 89b45c2f80b77920783b0038da7777a3dcc36f7e88685d365d916d3a986abde9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_jennings, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, distribution-scope=public, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vendor=Red Hat, Inc., release=553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container) Oct 5 05:54:07 localhost systemd[1]: Started 
libpod-conmon-89b45c2f80b77920783b0038da7777a3dcc36f7e88685d365d916d3a986abde9.scope. Oct 5 05:54:07 localhost podman[308305]: 2025-10-05 09:54:07.107102617 +0000 UTC m=+0.049643478 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:07 localhost systemd[1]: Started libcrun container. Oct 5 05:54:07 localhost podman[308305]: 2025-10-05 09:54:07.248179232 +0000 UTC m=+0.190720093 container init 89b45c2f80b77920783b0038da7777a3dcc36f7e88685d365d916d3a986abde9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_jennings, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, ceph=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 5 05:54:07 localhost podman[308305]: 2025-10-05 09:54:07.258225415 +0000 UTC m=+0.200766266 container start 89b45c2f80b77920783b0038da7777a3dcc36f7e88685d365d916d3a986abde9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_jennings, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, distribution-scope=public, name=rhceph, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, com.redhat.component=rhceph-container, release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:54:07 localhost podman[308305]: 2025-10-05 09:54:07.258545804 +0000 UTC m=+0.201086715 container attach 89b45c2f80b77920783b0038da7777a3dcc36f7e88685d365d916d3a986abde9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_jennings, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Oct 5 05:54:07 localhost systemd[1]: libpod-89b45c2f80b77920783b0038da7777a3dcc36f7e88685d365d916d3a986abde9.scope: Deactivated successfully. Oct 5 05:54:07 localhost xenodochial_jennings[308320]: 167 167 Oct 5 05:54:07 localhost podman[308305]: 2025-10-05 09:54:07.267724962 +0000 UTC m=+0.210265823 container died 89b45c2f80b77920783b0038da7777a3dcc36f7e88685d365d916d3a986abde9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_jennings, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, io.buildah.version=1.33.12, release=553, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on 
RHEL 9, build-date=2025-09-24T08:57:55) Oct 5 05:54:07 localhost podman[308325]: 2025-10-05 09:54:07.37638955 +0000 UTC m=+0.095422289 container remove 89b45c2f80b77920783b0038da7777a3dcc36f7e88685d365d916d3a986abde9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_jennings, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Oct 5 05:54:07 localhost systemd[1]: libpod-conmon-89b45c2f80b77920783b0038da7777a3dcc36f7e88685d365d916d3a986abde9.scope: Deactivated successfully. 
Oct 5 05:54:07 localhost nova_compute[297021]: 2025-10-05 09:54:07.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:07 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 handle_auth_request failed to assign global_id Oct 5 05:54:07 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:07 localhost ceph-mon[308154]: Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:54:07 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:07 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:54:07 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:07 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:07 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:54:07 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 handle_auth_request failed to assign global_id Oct 5 05:54:07 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 handle_auth_request failed to assign global_id Oct 5 05:54:07 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 handle_auth_request failed to assign global_id Oct 5 05:54:07 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 handle_auth_request failed to assign global_id Oct 5 05:54:07 localhost nova_compute[297021]: 2025-10-05 09:54:07.924 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df 
--format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.004 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.004 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:54:08 localhost podman[308406]: Oct 5 05:54:08 localhost systemd[1]: var-lib-containers-storage-overlay-f52ee321cc46e3d1d29e39d35113328e9676347149d2fafd2e6e88ce91afbcbf-merged.mount: Deactivated successfully. 
Oct 5 05:54:08 localhost podman[308406]: 2025-10-05 09:54:08.156923828 +0000 UTC m=+0.104379752 container create 981bf0897f5d914620fc4f513c78f96c57982c33c25d9e50a7a1460a1e16d608 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_williamson, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=553, distribution-scope=public, build-date=2025-09-24T08:57:55, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=) Oct 5 05:54:08 localhost systemd[1]: Started libpod-conmon-981bf0897f5d914620fc4f513c78f96c57982c33c25d9e50a7a1460a1e16d608.scope. Oct 5 05:54:08 localhost podman[308406]: 2025-10-05 09:54:08.110231142 +0000 UTC m=+0.057687096 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:08 localhost systemd[1]: Started libcrun container. 
Oct 5 05:54:08 localhost podman[308406]: 2025-10-05 09:54:08.225846437 +0000 UTC m=+0.173302361 container init 981bf0897f5d914620fc4f513c78f96c57982c33c25d9e50a7a1460a1e16d608 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_williamson, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, release=553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.226 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.227 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11745MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.228 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.228 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:54:08 localhost podman[308406]: 2025-10-05 09:54:08.235981172 +0000 UTC m=+0.183437096 container start 981bf0897f5d914620fc4f513c78f96c57982c33c25d9e50a7a1460a1e16d608 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_williamson, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, version=7, description=Red Hat Ceph Storage 7, 
release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 5 05:54:08 localhost podman[308406]: 2025-10-05 09:54:08.23627729 +0000 UTC m=+0.183733244 container attach 981bf0897f5d914620fc4f513c78f96c57982c33c25d9e50a7a1460a1e16d608 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_williamson, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.tags=rhceph ceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True) Oct 5 05:54:08 localhost vigilant_williamson[308421]: 167 167 Oct 5 05:54:08 localhost systemd[1]: libpod-981bf0897f5d914620fc4f513c78f96c57982c33c25d9e50a7a1460a1e16d608.scope: Deactivated successfully. 
Oct 5 05:54:08 localhost podman[308406]: 2025-10-05 09:54:08.240570036 +0000 UTC m=+0.188026040 container died 981bf0897f5d914620fc4f513c78f96c57982c33c25d9e50a7a1460a1e16d608 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_williamson, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64) Oct 5 05:54:08 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 handle_auth_request failed to assign global_id Oct 5 05:54:08 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 handle_auth_request failed to assign global_id Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.327 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.327 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.328 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:54:08 localhost podman[308426]: 2025-10-05 09:54:08.342251854 +0000 UTC m=+0.089341714 container remove 981bf0897f5d914620fc4f513c78f96c57982c33c25d9e50a7a1460a1e16d608 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_williamson, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.buildah.version=1.33.12, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, ceph=True, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, release=553, description=Red 
Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container) Oct 5 05:54:08 localhost systemd[1]: libpod-conmon-981bf0897f5d914620fc4f513c78f96c57982c33c25d9e50a7a1460a1e16d608.scope: Deactivated successfully. Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.382 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:54:08 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e9 handle_auth_request failed to assign global_id Oct 5 05:54:08 localhost ceph-mon[308154]: Reconfiguring osd.1 (monmap changed)... Oct 5 05:54:08 localhost ceph-mon[308154]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:54:08 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:08 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:08 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.852 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.861 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.881 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.885 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:54:08 localhost nova_compute[297021]: 2025-10-05 09:54:08.885 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:54:09 localhost podman[308525]: Oct 5 05:54:09 localhost systemd[1]: var-lib-containers-storage-overlay-a5ef8e4a8a7ce7c72d968334c673f54819f5683b7360066b30026dda1d5ead6e-merged.mount: Deactivated successfully. 
Oct 5 05:54:09 localhost podman[308525]: 2025-10-05 09:54:09.156708862 +0000 UTC m=+0.055523196 container create c0340eccc16e70cc790da56e4ddd3f8e16a8a59573f0fb524b9894594310d514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_lichterman, distribution-scope=public, name=rhceph, RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7) Oct 5 05:54:09 localhost systemd[1]: Started libpod-conmon-c0340eccc16e70cc790da56e4ddd3f8e16a8a59573f0fb524b9894594310d514.scope. Oct 5 05:54:09 localhost systemd[1]: Started libcrun container. 
Oct 5 05:54:09 localhost podman[308525]: 2025-10-05 09:54:09.217613124 +0000 UTC m=+0.116427488 container init c0340eccc16e70cc790da56e4ddd3f8e16a8a59573f0fb524b9894594310d514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_lichterman, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=553, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container) Oct 5 05:54:09 localhost podman[308525]: 2025-10-05 09:54:09.128803825 +0000 UTC m=+0.027618169 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:09 localhost podman[308525]: 2025-10-05 09:54:09.229730282 +0000 UTC m=+0.128544686 container start c0340eccc16e70cc790da56e4ddd3f8e16a8a59573f0fb524b9894594310d514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_lichterman, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , version=7, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=553, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:54:09 localhost cranky_lichterman[308540]: 167 167 Oct 5 05:54:09 localhost podman[308525]: 2025-10-05 09:54:09.230040102 +0000 UTC m=+0.128854476 container attach c0340eccc16e70cc790da56e4ddd3f8e16a8a59573f0fb524b9894594310d514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_lichterman, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, release=553, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured 
and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph) Oct 5 05:54:09 localhost systemd[1]: libpod-c0340eccc16e70cc790da56e4ddd3f8e16a8a59573f0fb524b9894594310d514.scope: Deactivated successfully. Oct 5 05:54:09 localhost podman[308525]: 2025-10-05 09:54:09.232309332 +0000 UTC m=+0.131123736 container died c0340eccc16e70cc790da56e4ddd3f8e16a8a59573f0fb524b9894594310d514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_lichterman, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.buildah.version=1.33.12, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, build-date=2025-09-24T08:57:55, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:54:09 localhost podman[308545]: 2025-10-05 09:54:09.329238795 +0000 UTC m=+0.089047960 container remove c0340eccc16e70cc790da56e4ddd3f8e16a8a59573f0fb524b9894594310d514 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_lichterman, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_BRANCH=main, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:54:09 localhost systemd[1]: libpod-conmon-c0340eccc16e70cc790da56e4ddd3f8e16a8a59573f0fb524b9894594310d514.scope: Deactivated successfully. Oct 5 05:54:09 localhost ceph-mon[308154]: Reconfiguring osd.4 (monmap changed)... Oct 5 05:54:09 localhost ceph-mon[308154]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:54:09 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:09 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:09 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:54:10 localhost systemd[1]: var-lib-containers-storage-overlay-40e5317e9331ec5584eebdde230e496496b5b01c9c2ebcf356dd6948f0d0c618-merged.mount: Deactivated successfully. 
Oct 5 05:54:10 localhost podman[308620]: Oct 5 05:54:10 localhost podman[308620]: 2025-10-05 09:54:10.18233915 +0000 UTC m=+0.085908201 container create 8349e3bd12d682911aff2a5ad24c402426c751950740a4a6761486ce43cd4294 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_hypatia, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., ceph=True, name=rhceph, io.buildah.version=1.33.12, architecture=x86_64, release=553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:54:10 localhost systemd[1]: Started libpod-conmon-8349e3bd12d682911aff2a5ad24c402426c751950740a4a6761486ce43cd4294.scope. Oct 5 05:54:10 localhost systemd[1]: Started libcrun container. 
Oct 5 05:54:10 localhost podman[308620]: 2025-10-05 09:54:10.146379848 +0000 UTC m=+0.049948939 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:10 localhost podman[308620]: 2025-10-05 09:54:10.255109223 +0000 UTC m=+0.158678274 container init 8349e3bd12d682911aff2a5ad24c402426c751950740a4a6761486ce43cd4294 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_hypatia, name=rhceph, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, RELEASE=main, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:54:10 localhost podman[308620]: 2025-10-05 09:54:10.272783981 +0000 UTC m=+0.176353032 container start 8349e3bd12d682911aff2a5ad24c402426c751950740a4a6761486ce43cd4294 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_hypatia, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=553, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux ) Oct 5 05:54:10 localhost podman[308620]: 2025-10-05 09:54:10.273168622 +0000 UTC m=+0.176737713 container attach 8349e3bd12d682911aff2a5ad24c402426c751950740a4a6761486ce43cd4294 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_hypatia, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container, vcs-type=git, version=7, GIT_CLEAN=True, io.buildah.version=1.33.12, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55) Oct 5 05:54:10 localhost 
stoic_hypatia[308635]: 167 167 Oct 5 05:54:10 localhost systemd[1]: libpod-8349e3bd12d682911aff2a5ad24c402426c751950740a4a6761486ce43cd4294.scope: Deactivated successfully. Oct 5 05:54:10 localhost podman[308620]: 2025-10-05 09:54:10.276354257 +0000 UTC m=+0.179923318 container died 8349e3bd12d682911aff2a5ad24c402426c751950740a4a6761486ce43cd4294 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_hypatia, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:54:10 localhost podman[308640]: 2025-10-05 09:54:10.380003006 +0000 UTC m=+0.096086575 container remove 8349e3bd12d682911aff2a5ad24c402426c751950740a4a6761486ce43cd4294 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_hypatia, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 
7, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_CLEAN=True, maintainer=Guillaume Abrioux ) Oct 5 05:54:10 localhost systemd[1]: libpod-conmon-8349e3bd12d682911aff2a5ad24c402426c751950740a4a6761486ce43cd4294.scope: Deactivated successfully. Oct 5 05:54:10 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:54:10 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:54:10 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:10 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:10 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:11 localhost podman[308709]: Oct 5 05:54:11 localhost podman[308709]: 2025-10-05 09:54:11.100717855 +0000 UTC m=+0.066754052 container create 43f25a7a700fc3abb8fb885d6463976719d372819c51d9097d2db4a2b53f1661 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cohen, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_BRANCH=main, 
version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=553, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True) Oct 5 05:54:11 localhost systemd[1]: Started libpod-conmon-43f25a7a700fc3abb8fb885d6463976719d372819c51d9097d2db4a2b53f1661.scope. Oct 5 05:54:11 localhost systemd[1]: Started libcrun container. Oct 5 05:54:11 localhost systemd[1]: tmp-crun.9QXUk1.mount: Deactivated successfully. Oct 5 05:54:11 localhost systemd[1]: var-lib-containers-storage-overlay-e023118af14400d56edb2a25028a2dc01a9c4ba82290092dec89bf1c6fbe3fa4-merged.mount: Deactivated successfully. 
Oct 5 05:54:11 localhost podman[308709]: 2025-10-05 09:54:11.160653984 +0000 UTC m=+0.126690181 container init 43f25a7a700fc3abb8fb885d6463976719d372819c51d9097d2db4a2b53f1661 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cohen, ceph=True, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git) Oct 5 05:54:11 localhost podman[308709]: 2025-10-05 09:54:11.066358468 +0000 UTC m=+0.032394695 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:11 localhost brave_cohen[308725]: 167 167 Oct 5 05:54:11 localhost systemd[1]: libpod-43f25a7a700fc3abb8fb885d6463976719d372819c51d9097d2db4a2b53f1661.scope: Deactivated successfully. 
Oct 5 05:54:11 localhost podman[308709]: 2025-10-05 09:54:11.171626471 +0000 UTC m=+0.137662758 container start 43f25a7a700fc3abb8fb885d6463976719d372819c51d9097d2db4a2b53f1661 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cohen, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True) Oct 5 05:54:11 localhost podman[308709]: 2025-10-05 09:54:11.172005831 +0000 UTC m=+0.138042038 container attach 43f25a7a700fc3abb8fb885d6463976719d372819c51d9097d2db4a2b53f1661 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cohen, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, architecture=x86_64, 
GIT_CLEAN=True, ceph=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:54:11 localhost podman[308709]: 2025-10-05 09:54:11.174540099 +0000 UTC m=+0.140576326 container died 43f25a7a700fc3abb8fb885d6463976719d372819c51d9097d2db4a2b53f1661 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cohen, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Oct 5 05:54:11 localhost systemd[1]: var-lib-containers-storage-overlay-402feeaa0eb0625bec2d77f5348a8cba00cc8cd47cf2b2f30ca5233d24753e67-merged.mount: Deactivated successfully. 
Oct 5 05:54:11 localhost podman[308730]: 2025-10-05 09:54:11.259707278 +0000 UTC m=+0.076447574 container remove 43f25a7a700fc3abb8fb885d6463976719d372819c51d9097d2db4a2b53f1661 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_cohen, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhceph, release=553, vcs-type=git, GIT_CLEAN=True, ceph=True, io.openshift.expose-services=, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:54:11 localhost systemd[1]: libpod-conmon-43f25a7a700fc3abb8fb885d6463976719d372819c51d9097d2db4a2b53f1661.scope: Deactivated successfully. Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. 
Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.381182) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051381310, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 10653, "num_deletes": 254, "total_data_size": 14886539, "memory_usage": 15419160, "flush_reason": "Manual Compaction"} Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051438785, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 13441330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 10658, "table_properties": {"data_size": 13381377, "index_size": 33287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 271504, "raw_average_key_size": 26, "raw_value_size": 13204956, "raw_average_value_size": 1289, "num_data_blocks": 1278, "num_entries": 10238, "num_filter_entries": 10238, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 1759658041, "file_creation_time": 1759658051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 57646 microseconds, and 18939 cpu microseconds. Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.438849) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 13441330 bytes OK Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.438876) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.441044) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.441065) EVENT_LOG_v1 {"time_micros": 1759658051441058, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.441084) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14812626, prev total WAL file size 14857077, number of live WAL files 2. 
Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.443726) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end) Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(12MB) 8(2012B)] Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051443836, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 13443342, "oldest_snapshot_seqno": -1} Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9988 keys, 13438070 bytes, temperature: kUnknown Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051550126, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 13438070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13378742, "index_size": 33243, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25029, "raw_key_size": 266735, "raw_average_key_size": 26, "raw_value_size": 13205611, "raw_average_value_size": 1322, "num_data_blocks": 1277, "num_entries": 
9988, "num_filter_entries": 9988, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.550695) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 13438070 bytes Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.552717) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.2 rd, 126.2 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(12.8, 0.0 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10243, records dropped: 255 output_compression: NoCompression Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.552750) EVENT_LOG_v1 {"time_micros": 1759658051552734, "job": 4, "event": "compaction_finished", "compaction_time_micros": 106483, "compaction_time_cpu_micros": 41272, "output_level": 6, "num_output_files": 1, "total_output_size": 13438070, "num_input_records": 10243, "num_output_records": 9988, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051555456, "job": 4, "event": "table_file_deletion", "file_number": 14} Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051555532, "job": 4, "event": 
"table_file_deletion", "file_number": 8} Oct 5 05:54:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:11.443588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:11 localhost nova_compute[297021]: 2025-10-05 09:54:11.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:11 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:54:11 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:54:11 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:11 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:11 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:11 localhost nova_compute[297021]: 2025-10-05 09:54:11.885 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:54:11 localhost nova_compute[297021]: 2025-10-05 09:54:11.886 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:54:11 localhost nova_compute[297021]: 2025-10-05 09:54:11.886 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:54:12 localhost nova_compute[297021]: 2025-10-05 09:54:12.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:54:12 localhost podman[308747]: 2025-10-05 09:54:12.687868499 +0000 UTC m=+0.089968260 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:54:12 localhost podman[308747]: 2025-10-05 09:54:12.722009881 +0000 UTC m=+0.124109632 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 05:54:12 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:54:12 localhost ceph-mon[308154]: Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:54:12 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:54:12 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:12 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:12 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:54:12 localhost nova_compute[297021]: 2025-10-05 09:54:12.918 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:54:12 localhost nova_compute[297021]: 2025-10-05 09:54:12.919 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:54:12 localhost nova_compute[297021]: 2025-10-05 09:54:12.919 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:54:12 localhost nova_compute[297021]: 2025-10-05 09:54:12.919 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:54:13 localhost ceph-mon[308154]: Reconfiguring osd.2 (monmap changed)... Oct 5 05:54:13 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:54:13 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:13 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:13 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:54:14 localhost ceph-mon[308154]: Reconfiguring osd.5 (monmap changed)... 
Oct 5 05:54:14 localhost ceph-mon[308154]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[308154]: from='mgr.17391 
172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:15 localhost nova_compute[297021]: 2025-10-05 09:54:15.184 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:54:15 localhost nova_compute[297021]: 2025-10-05 09:54:15.204 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:54:15 localhost nova_compute[297021]: 2025-10-05 09:54:15.204 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 
2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:54:15 localhost nova_compute[297021]: 2025-10-05 09:54:15.205 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:54:15 localhost ceph-mon[308154]: Reconfig service osd.default_drive_group Oct 5 05:54:15 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:54:15 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:54:15 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:15 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:15 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: mon.np0005471150@4(peon).osd e82 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Oct 5 05:54:16 localhost ceph-mon[308154]: mon.np0005471150@4(peon).osd e82 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Oct 5 05:54:16 localhost ceph-mon[308154]: mon.np0005471150@4(peon).osd e83 e83: 6 total, 6 up, 6 in Oct 5 05:54:16 localhost systemd[1]: session-68.scope: Deactivated successfully. Oct 5 05:54:16 localhost systemd[1]: session-68.scope: Consumed 26.404s CPU time. Oct 5 05:54:16 localhost systemd-logind[760]: Session 68 logged out. 
Waiting for processes to exit. Oct 5 05:54:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:54:16 localhost systemd-logind[760]: Removed session 68. Oct 5 05:54:16 localhost podman[308765]: 2025-10-05 09:54:16.553974145 +0000 UTC m=+0.090244757 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:54:16 localhost podman[308765]: 2025-10-05 09:54:16.667742207 +0000 UTC m=+0.204012809 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:54:16 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:54:16 localhost sshd[308790]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:54:16 localhost nova_compute[297021]: 2025-10-05 09:54:16.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:16 localhost systemd-logind[760]: New session 71 of user ceph-admin. Oct 5 05:54:16 localhost systemd[1]: Started Session 71 of User ceph-admin. Oct 5 05:54:16 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:54:16 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: from='client.? 172.18.0.200:0/496180965' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: Activating manager daemon np0005471152.kbhlus Oct 5 05:54:16 localhost ceph-mon[308154]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:54:16 localhost ceph-mon[308154]: Manager daemon np0005471152.kbhlus is now available Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"}]': finished Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"}]': finished Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch Oct 5 05:54:16 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch Oct 5 05:54:17 localhost nova_compute[297021]: 2025-10-05 09:54:17.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:17 localhost ceph-mon[308154]: removing stray HostCache host record np0005471146.localdomain.devices.0 Oct 5 05:54:17 localhost podman[308903]: 2025-10-05 09:54:17.986612867 +0000 UTC m=+0.109402915 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., RELEASE=main, version=7, io.openshift.expose-services=, ceph=True, release=553, architecture=x86_64, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 5 05:54:18 localhost podman[308903]: 2025-10-05 09:54:18.093137463 +0000 UTC m=+0.215927551 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, version=7, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:54:18 localhost ceph-mon[308154]: [05/Oct/2025:09:54:17] ENGINE Bus STARTING Oct 5 05:54:18 localhost ceph-mon[308154]: [05/Oct/2025:09:54:18] ENGINE Serving on http://172.18.0.108:8765 Oct 5 05:54:18 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 
localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:54:19 localhost systemd[1]: tmp-crun.o8pbqA.mount: Deactivated successfully. Oct 5 05:54:19 localhost podman[309111]: 2025-10-05 09:54:19.884342147 +0000 UTC m=+0.100743071 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 05:54:19 localhost podman[309111]: 2025-10-05 09:54:19.903970767 +0000 UTC m=+0.120371741 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 05:54:19 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:54:19 localhost ceph-mon[308154]: [05/Oct/2025:09:54:18] ENGINE Serving on https://172.18.0.108:7150 Oct 5 05:54:19 localhost ceph-mon[308154]: [05/Oct/2025:09:54:18] ENGINE Bus STARTED Oct 5 05:54:19 localhost ceph-mon[308154]: [05/Oct/2025:09:54:18] ENGINE Client ('172.18.0.108', 42012) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:54:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:54:20.453 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:54:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:54:20.455 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:54:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:54:20.456 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost 
ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd/host:np0005471147", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd/host:np0005471147", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: 
from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:54:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:54:21 localhost ceph-mon[308154]: mon.np0005471150@4(peon).osd e83 _set_new_cache_sizes cache_size:1019823160 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:21 localhost podman[248506]: time="2025-10-05T09:54:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:54:21 localhost podman[248506]: @ - - [05/Oct/2025:09:54:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:54:21 localhost podman[248506]: @ - - [05/Oct/2025:09:54:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18823 "" "Go-http-client/1.1" Oct 5 05:54:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 05:54:21 localhost nova_compute[297021]: 2025-10-05 09:54:21.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:21 localhost podman[309469]: 2025-10-05 09:54:21.940758102 +0000 UTC m=+0.097135354 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Oct 5 05:54:21 localhost podman[309469]: 2025-10-05 09:54:21.960845274 +0000 UTC m=+0.117222516 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git) Oct 5 05:54:21 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:54:21 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:54:21 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:54:21 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:54:21 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:54:21 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 05:54:21 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:54:21 localhost ceph-mon[308154]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:21 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:21 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:21 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:21 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:21 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:22 localhost openstack_network_exporter[250601]: ERROR 09:54:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:54:22 localhost openstack_network_exporter[250601]: ERROR 09:54:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:54:22 localhost openstack_network_exporter[250601]: ERROR 09:54:22 appctl.go:144: Failed to get PID for ovn-northd: no control 
socket files found for ovn-northd Oct 5 05:54:22 localhost openstack_network_exporter[250601]: ERROR 09:54:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:54:22 localhost openstack_network_exporter[250601]: Oct 5 05:54:22 localhost openstack_network_exporter[250601]: ERROR 09:54:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:54:22 localhost openstack_network_exporter[250601]: Oct 5 05:54:22 localhost nova_compute[297021]: 2025-10-05 09:54:22.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:23 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:23 localhost ceph-mon[308154]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:23 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:23 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:23 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:54:23 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:54:23 localhost ceph-mon[308154]: Updating np0005471147.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:54:23 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:54:23 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:54:24 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 
05:54:24 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:54:24 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:54:24 localhost ceph-mon[308154]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:54:24 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:24 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:24 localhost ceph-mon[308154]: 
from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:25 localhost ceph-mon[308154]: Reconfiguring crash.np0005471147 (monmap changed)... Oct 5 05:54:25 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain Oct 5 05:54:25 localhost ceph-mon[308154]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Oct 5 05:54:25 localhost ceph-mon[308154]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Oct 5 05:54:25 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:25 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:25 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:25 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:26 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)... 
Oct 5 05:54:26 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain Oct 5 05:54:26 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:26 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:26 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:26 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:26 localhost ceph-mon[308154]: mon.np0005471150@4(peon).osd e83 _set_new_cache_sizes cache_size:1020050646 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. 
Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.792843) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066792890, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1230, "num_deletes": 257, "total_data_size": 5356548, "memory_usage": 5745552, "flush_reason": "Manual Compaction"} Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066824761, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3307863, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10663, "largest_seqno": 11888, "table_properties": {"data_size": 3302081, "index_size": 2993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14128, "raw_average_key_size": 20, "raw_value_size": 3289681, "raw_average_value_size": 4880, "num_data_blocks": 125, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658051, "oldest_key_time": 1759658051, "file_creation_time": 1759658066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 31986 microseconds, and 9723 cpu microseconds. Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.824825) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3307863 bytes OK Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.824856) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.826956) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.826990) EVENT_LOG_v1 {"time_micros": 1759658066826984, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.827015) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5350009, prev total WAL file size 5350009, 
number of live WAL files 2. Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.828204) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303038' seq:72057594037927935, type:22 .. '6B760031323633' seq:0, type:0; will stop at (end) Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3230KB)], [15(12MB)] Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066828284, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 16745933, "oldest_snapshot_seqno": -1} Oct 5 05:54:26 localhost nova_compute[297021]: 2025-10-05 09:54:26.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10129 keys, 15701613 bytes, temperature: kUnknown Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066977938, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 15701613, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15641460, "index_size": 33748, "index_partitions": 0, "top_level_index_size": 
0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 271888, "raw_average_key_size": 26, "raw_value_size": 15465864, "raw_average_value_size": 1526, "num_data_blocks": 1283, "num_entries": 10129, "num_filter_entries": 10129, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.978253) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 15701613 bytes Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.979832) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.8 rd, 104.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 12.8 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(9.8) write-amplify(4.7) OK, records in: 10662, records dropped: 533 output_compression: NoCompression Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.979861) EVENT_LOG_v1 {"time_micros": 1759658066979849, "job": 6, "event": "compaction_finished", "compaction_time_micros": 149737, "compaction_time_cpu_micros": 43680, "output_level": 6, "num_output_files": 1, "total_output_size": 15701613, "num_input_records": 10662, "num_output_records": 10129, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066980533, "job": 6, "event": "table_file_deletion", "file_number": 17} Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066982341, "job": 6, 
"event": "table_file_deletion", "file_number": 15} Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.828105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:27 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)... 
Oct 5 05:54:27 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain Oct 5 05:54:27 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:27 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:27 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:27 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:27 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:27 localhost nova_compute[297021]: 2025-10-05 09:54:27.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:54:28 localhost ceph-mon[308154]: Reconfiguring crash.np0005471148 (monmap changed)... 
Oct 5 05:54:28 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:54:28 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:28 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:28 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:54:28 localhost podman[309896]: 2025-10-05 09:54:28.121640909 +0000 UTC m=+0.085509999 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:54:28 localhost podman[309904]: Oct 5 05:54:28 localhost podman[309904]: 2025-10-05 09:54:28.148625218 +0000 UTC m=+0.086661321 container create 26e863e5c2f9362225f608f3a77d2ef728fce7f7e65620382db4d4ffdacf72db (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_solomon, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, 
maintainer=Guillaume Abrioux , GIT_CLEAN=True, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Oct 5 05:54:28 localhost podman[309896]: 2025-10-05 09:54:28.163777157 +0000 UTC m=+0.127646167 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:54:28 localhost systemd[1]: Started 
libpod-conmon-26e863e5c2f9362225f608f3a77d2ef728fce7f7e65620382db4d4ffdacf72db.scope. Oct 5 05:54:28 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:54:28 localhost systemd[1]: Started libcrun container. Oct 5 05:54:28 localhost podman[309904]: 2025-10-05 09:54:28.122557484 +0000 UTC m=+0.060593647 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:28 localhost podman[309904]: 2025-10-05 09:54:28.223687875 +0000 UTC m=+0.161724038 container init 26e863e5c2f9362225f608f3a77d2ef728fce7f7e65620382db4d4ffdacf72db (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_solomon, vendor=Red Hat, Inc., io.openshift.expose-services=, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.33.12, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 5 05:54:28 localhost podman[309904]: 2025-10-05 09:54:28.232969045 +0000 UTC m=+0.171005178 container start 26e863e5c2f9362225f608f3a77d2ef728fce7f7e65620382db4d4ffdacf72db (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_solomon, release=553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.buildah.version=1.33.12) Oct 5 05:54:28 localhost podman[309904]: 2025-10-05 09:54:28.233339485 +0000 UTC m=+0.171375628 container attach 26e863e5c2f9362225f608f3a77d2ef728fce7f7e65620382db4d4ffdacf72db (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_solomon, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.component=rhceph-container, version=7, ceph=True, release=553, name=rhceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph) Oct 5 05:54:28 localhost keen_solomon[309935]: 167 167 Oct 5 05:54:28 localhost systemd[1]: libpod-26e863e5c2f9362225f608f3a77d2ef728fce7f7e65620382db4d4ffdacf72db.scope: Deactivated successfully. Oct 5 05:54:28 localhost podman[309904]: 2025-10-05 09:54:28.237195419 +0000 UTC m=+0.175231572 container died 26e863e5c2f9362225f608f3a77d2ef728fce7f7e65620382db4d4ffdacf72db (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_solomon, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, version=7, maintainer=Guillaume Abrioux , distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.buildah.version=1.33.12, 
com.redhat.component=rhceph-container, GIT_CLEAN=True) Oct 5 05:54:28 localhost podman[309940]: 2025-10-05 09:54:28.326587172 +0000 UTC m=+0.079544948 container remove 26e863e5c2f9362225f608f3a77d2ef728fce7f7e65620382db4d4ffdacf72db (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_solomon, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph) Oct 5 05:54:28 localhost systemd[1]: libpod-conmon-26e863e5c2f9362225f608f3a77d2ef728fce7f7e65620382db4d4ffdacf72db.scope: Deactivated successfully. Oct 5 05:54:29 localhost systemd[1]: var-lib-containers-storage-overlay-cafbee31d12aeecc107a6a3563ead991900173b639d36cf8920a7577a1684995-merged.mount: Deactivated successfully. 
Oct 5 05:54:29 localhost ceph-mon[308154]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:54:29 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:29 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:29 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:29 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:29 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:54:29 localhost podman[310015]: Oct 5 05:54:29 localhost podman[310015]: 2025-10-05 09:54:29.214570698 +0000 UTC m=+0.080613806 container create 454e085453c0b5423de6a13699cb8ccbe212addd5238eb4b4a1a8720be29cdef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_leakey, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, ceph=True, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , release=553, build-date=2025-09-24T08:57:55, version=7, CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:54:29 localhost systemd[1]: 
Started libpod-conmon-454e085453c0b5423de6a13699cb8ccbe212addd5238eb4b4a1a8720be29cdef.scope. Oct 5 05:54:29 localhost podman[310015]: 2025-10-05 09:54:29.183672534 +0000 UTC m=+0.049715692 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:29 localhost systemd[1]: Started libcrun container. Oct 5 05:54:29 localhost podman[310015]: 2025-10-05 09:54:29.30793801 +0000 UTC m=+0.173981128 container init 454e085453c0b5423de6a13699cb8ccbe212addd5238eb4b4a1a8720be29cdef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_leakey, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:54:29 localhost podman[310015]: 2025-10-05 09:54:29.319188754 +0000 UTC m=+0.185231872 container start 454e085453c0b5423de6a13699cb8ccbe212addd5238eb4b4a1a8720be29cdef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_leakey, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
com.redhat.component=rhceph-container, release=553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64) Oct 5 05:54:29 localhost podman[310015]: 2025-10-05 09:54:29.319543893 +0000 UTC m=+0.185587001 container attach 454e085453c0b5423de6a13699cb8ccbe212addd5238eb4b4a1a8720be29cdef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_leakey, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=) Oct 5 05:54:29 localhost gallant_leakey[310030]: 167 167 Oct 5 05:54:29 localhost systemd[1]: libpod-454e085453c0b5423de6a13699cb8ccbe212addd5238eb4b4a1a8720be29cdef.scope: Deactivated successfully. Oct 5 05:54:29 localhost podman[310015]: 2025-10-05 09:54:29.322354029 +0000 UTC m=+0.188397167 container died 454e085453c0b5423de6a13699cb8ccbe212addd5238eb4b4a1a8720be29cdef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_leakey, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, version=7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-09-24T08:57:55, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64) Oct 5 05:54:29 localhost podman[310035]: 2025-10-05 09:54:29.417311993 +0000 UTC m=+0.086736173 container remove 454e085453c0b5423de6a13699cb8ccbe212addd5238eb4b4a1a8720be29cdef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_leakey, name=rhceph, vcs-type=git, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:54:29 localhost systemd[1]: libpod-conmon-454e085453c0b5423de6a13699cb8ccbe212addd5238eb4b4a1a8720be29cdef.scope: Deactivated successfully. Oct 5 05:54:30 localhost systemd[1]: tmp-crun.zQ7sMV.mount: Deactivated successfully. Oct 5 05:54:30 localhost systemd[1]: var-lib-containers-storage-overlay-bd69a0a6753b2425f5dd7e04bcd088cb3c5ea91da38cbdc37b25b40dff2688c0-merged.mount: Deactivated successfully. 
Oct 5 05:54:30 localhost ceph-mon[308154]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:54:30 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:30 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:30 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:30 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:30 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:54:31 localhost ceph-mon[308154]: mon.np0005471150@4(peon).osd e83 _set_new_cache_sizes cache_size:1020054669 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:31 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:54:31 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:54:31 localhost ceph-mon[308154]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:54:31 localhost ceph-mon[308154]: Saving service mon spec with placement label:mon Oct 5 05:54:31 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:54:31 localhost systemd[1]: tmp-crun.LEqJ3h.mount: Deactivated successfully. 
Oct 5 05:54:31 localhost podman[310058]: 2025-10-05 09:54:31.679739818 +0000 UTC m=+0.084428610 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:54:31 localhost podman[310058]: 2025-10-05 09:54:31.715548865 +0000 UTC m=+0.120237677 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:54:31 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:54:31 localhost nova_compute[297021]: 2025-10-05 09:54:31.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:32 localhost nova_compute[297021]: 2025-10-05 09:54:32.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:32 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[308154]: Reconfiguring mon.np0005471151 (monmap changed)... Oct 5 05:54:32 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:54:32 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:54:32 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:32 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:33 localhost ceph-mon[308154]: Reconfiguring crash.np0005471152 (monmap changed)... 
Oct 5 05:54:33 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:54:33 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:33 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:33 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:54:34 localhost ceph-mon[308154]: Reconfiguring osd.0 (monmap changed)... Oct 5 05:54:34 localhost ceph-mon[308154]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:54:34 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:34 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:34 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:34 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:34 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:54:35 localhost podman[310081]: 2025-10-05 09:54:35.693356108 +0000 UTC m=+0.097668518 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 05:54:35 localhost podman[310081]: 2025-10-05 09:54:35.733931654 +0000 UTC m=+0.138244054 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:54:35 localhost podman[310082]: 2025-10-05 09:54:35.737234903 +0000 UTC m=+0.139754874 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd) Oct 5 05:54:35 localhost podman[310082]: 2025-10-05 09:54:35.774236092 +0000 UTC m=+0.176756113 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:54:35 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:54:35 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:54:35 localhost ceph-mon[308154]: Reconfiguring osd.3 (monmap changed)... 
Oct 5 05:54:35 localhost ceph-mon[308154]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:54:36 localhost ceph-mon[308154]: mon.np0005471150@4(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:36 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:36 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:36 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:36 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:36 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:54:36 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:54:36 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:54:36 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:54:36 localhost nova_compute[297021]: 2025-10-05 09:54:36.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:37 localhost nova_compute[297021]: 2025-10-05 09:54:37.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:37 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:37 localhost ceph-mon[308154]: from='mgr.17403 
' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:37 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... Oct 5 05:54:37 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:37 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:37 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:54:37 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:37 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:37 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.836 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling 
/usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.838 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.862 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.863 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a11dc29e-3ef7-4329-bacb-069addbca54f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:54:38.838283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4eac2e6c-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': '8ef916f32bea770d61f668622c38aacc973ba2aecb1d8574be4fa7aff0b4825a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:54:38.838283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4eac43a2-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': '619d998d4842531faae9db75368b3adb2eccffa78f465eed79ea20dda048ec6a'}]}, 'timestamp': '2025-10-05 09:54:38.863697', '_unique_id': '6dcc1bb19b7b4a9283418344cfadbb51'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.866 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.866 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.867 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '1f157799-9574-4228-91ef-064b8f079feb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:54:38.866903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4eacd344-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': 'a04af7faa9aa158ed3e129d6a49e094c5084a8ed6823388fabbaec62b2d6a69e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:54:38.866903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4eace55a-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': 'cc2735077c503889bf435247eba1cbe7ebdf222051f5479bbb059be706374d93'}]}, 'timestamp': '2025-10-05 09:54:38.867797', '_unique_id': '6d6f0f232bc84c6b9ff6e5997caafc07'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.868 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.869 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.873 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
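The chained traceback above shows the pattern precisely: a raw socket `connect()` fails with `ConnectionRefusedError: [Errno 111]`, and kombu's `_reraise_as_library_errors` re-raises it as `kombu.exceptions.OperationalError` with the original error preserved as the cause ("The above exception was the direct cause of the following exception"). A minimal, self-contained sketch of that wrapping pattern follows; the `OperationalError` class and the `find_refused_port` helper here are illustrative stand-ins, not the real kombu API:

```python
import errno
import socket


class OperationalError(Exception):
    """Stand-in for kombu.exceptions.OperationalError (illustrative only)."""


def find_refused_port():
    # Bind an ephemeral port and release it: a subsequent connect to it
    # is then (almost always) refused because nothing is listening.
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]
    s.close()
    return port


def establish_connection(host, port, timeout=1.0):
    # Mirrors the amqp transport layer: a refused TCP connect surfaces
    # as ConnectionRefusedError (errno 111, ECONNREFUSED, on Linux).
    sock = socket.socket()
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
    finally:
        sock.close()


def connect_with_library_errors(host, port):
    # Mirrors kombu's _reraise_as_library_errors: the low-level OSError
    # is re-raised as a library-level error, keeping the original as
    # __cause__, which is why the log prints two chained tracebacks.
    try:
        establish_connection(host, port)
    except OSError as exc:
        raise OperationalError(str(exc)) from exc
```

Catching the library-level error while inspecting `__cause__.errno` is how a consumer can still distinguish "broker down" (ECONNREFUSED) from, say, a DNS or TLS failure.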
Payload={'message_id': '680f7bea-24b8-4a4b-b920-5a4fb9a5123a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.870051', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eade6da-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': '5e1e71b38820d9a2e9be9224fb0da6b19d8504c535935be70a75d659ea077fca'}]}, 'timestamp': '2025-10-05 09:54:38.874453', '_unique_id': '44a43a9457d24982b08514284d041e36'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.875 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 09:54:38.876 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.876 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.895 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '189ba17a-beb5-412e-a02f-e175af08332e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:54:38.876709', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 
'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '4eb127e6-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.11902283, 'message_signature': '29f62f255f45cece599a65919285f3c2ef4949ca87e17a2b925d2ad95282ca98'}]}, 'timestamp': '2025-10-05 09:54:38.895735', '_unique_id': '51791a5fc1444914a0e384dba2ded386'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.898 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.911 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.911 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f2266c14-e3b1-4e08-9649-4a9f611240d8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:54:38.898864', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4eb3a03e-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.122777691, 'message_signature': '014f57506a4ef421d160189cc4eb705fd59710d6e9b2552b80062d5fb5b77c31'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:54:38.898864', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4eb3b15a-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.122777691, 'message_signature': '384026d1d7c4ebdb209bc88ff22c58b41128dcd75767b5e9c107367a35eb7449'}]}, 'timestamp': '2025-10-05 09:54:38.912340', '_unique_id': '7e1d648dce4a48b89ae7fb24fad8bf9c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.914 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.914 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.915 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 13080000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '215e5145-64bf-4374-9983-299bd7458610', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13080000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:54:38.915008', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '4eb42b12-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.11902283, 'message_signature': '11534b624c7f5b4e1969736082230e9a821a5e62f5425f9dec555a098851fd00'}]}, 'timestamp': '2025-10-05 09:54:38.915527', '_unique_id': '31e8db42c9ce462681574b57669015b8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.917 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.917 
12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.918 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceph-mon[308154]: Reconfiguring mon.np0005471152 (monmap changed)... Oct 5 05:54:38 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:54:38 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:38 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:38 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:54:38 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:38 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '62854550-57a5-4c06-aa57-f7750a680732', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:54:38.917894', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4eb49c5a-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.122777691, 'message_signature': 'e8ff4a7394153282e92441291aab2c95e86df80e88bb965075a9daa02f658421'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:54:38.917894', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4eb4aea2-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.122777691, 'message_signature': '97a29aa77ab8a23c6379f04423ca339ca500fd4006cef71a634fa3f13645a4c8'}]}, 'timestamp': '2025-10-05 09:54:38.918834', '_unique_id': '321cedf7afa443cb8a2d6155de3a1083'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.919 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.920 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.921 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.921 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'e3ae5f68-5275-4f39-b7ab-8c15102aa92c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:54:38.921135', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4eb51a68-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': '8e531a24f0a36dcbd26558dce0a5d0fcb60193a0ebd41c623d5248348805727b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:54:38.921135', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4eb52c24-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': 'fbb030ee6b761148ea7823fee6f8e9a255d64b7b653141553f98b448c5d5fee0'}]}, 'timestamp': '2025-10-05 09:54:38.922033', '_unique_id': 'a8690792b05247aca598c7e19dc2c066'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:54:38.923 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.923 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.924 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.924 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '17a827cf-cf78-40ef-b09f-41027a378357', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.924440', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eb59b14-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': '28547d2ea1da1f825432e4eb959584958b3fdb31efcc2dcace9baeaa953a2023'}]}, 'timestamp': '2025-10-05 09:54:38.924905', '_unique_id': '204c4ef682254833ac197580075f5728'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.925 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 09:54:38.927 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.927 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.927 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd962a677-4277-4717-aea5-8ecc53bdff7e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:54:38.927300', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4eb60ba8-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': '570a6e7ecce508fe436d036b3c62aa0378dca99e7261aea567d9579c114d70dc'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:54:38.927300', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4eb61b7a-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': '42acbb03d5f5c65ae85c58fc97571ea7fa8937e859208e7437a06c08f41234b6'}]}, 'timestamp': '2025-10-05 09:54:38.928161', '_unique_id': '4b8b8b8ab6a244a98e4491ec53012a56'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12
ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.929 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.930 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.930 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.930 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.930 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'af80393f-7597-4d27-9d31-ae83ec37fd43', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:54:38.930528', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4eb68970-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.122777691, 'message_signature': 'cd97af454e061349cd2724c047fbd92bfb56449d1ab6ca41541d1b3a7628f198'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:54:38.930528', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4eb699b0-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.122777691, 'message_signature': '62d525dc9c16291bfc87a4598430e55b0b29f7436c10bb1966b5da2c0f123c95'}]}, 'timestamp': '2025-10-05 09:54:38.931433', '_unique_id': '1ee86987ba024cbda914f0a6559e81f1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.932 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.933 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.933 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'cd012489-23e7-4390-8f11-9329c6c40c15', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.933638', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eb701e8-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': '8605364e02f8aa56c5135c3258e5c15a4c7d1af6f76102560de226977bdcb56d'}]}, 'timestamp': '2025-10-05 09:54:38.934088', '_unique_id': '3802c6944dff4ce298408fa6b1894180'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.936 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.936 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e7e85256-9dd0-4c2d-a280-3411eb1eac3f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.936149', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eb763fe-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': '5ae483ecd2a4a0efa8c0e611aa981184ced62ed403ac2d95504c3ba5806c98c2'}]}, 'timestamp': '2025-10-05 09:54:38.936634', '_unique_id': '39f9cc07a9fe459497b99afb85de59b3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.937 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.938 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6cf73a27-768e-4914-8a70-d0def1be66ac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.938770', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eb7ca4c-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': 'f3c9bc97fffbc33aa5426bfc49e2f33504e10fc4285115a2ef89b9f3c456f255'}]}, 'timestamp': '2025-10-05 09:54:38.939220', '_unique_id': '25170babc9794f868ebcbb3588a531df'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.940 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.941 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '05e1b5a4-b871-42cd-aba7-0491ce75f00f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.941324', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eb8302c-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': '2bfbdd362da4ec5c4b7c0c99fd2995361fbce45cd6cf1893388eaf5cb7dee854'}]}, 'timestamp': '2025-10-05 09:54:38.941829', '_unique_id': '6c4906b8ed874296a647769252d3751f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.943 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.943 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'df21cfa6-f1ac-42db-8fba-23ea16e9b9f0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.943905', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eb892ce-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': 'c2632ca2e6a4a10d1831eedf638cda9b597fb74589e9f0705c31477d26db8c85'}]}, 'timestamp': '2025-10-05 09:54:38.944351', '_unique_id': '821070aeb75f4887980c09e8971273f0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.945 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.946 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.946 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7583563d-5f75-4a53-a3b7-0bd20cf2c372', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.946447', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eb8f656-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': 'bbd9053771ed04b8acfc0278d7468ca7d880cca538db7987940bf717f4091e82'}]}, 'timestamp': '2025-10-05 09:54:38.946897', '_unique_id': 'e65f21de744942b1bf42e1c2b3c85a91'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05
09:54:38.947 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.948 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7094b781-9da1-43c1-a8cc-6bf2bc38a744', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.948950', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eb957ea-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': '8d840812ec5520f1949845e3d6c6503eb72f9173695247522797a0f6c5e7103f'}]}, 'timestamp': '2025-10-05 09:54:38.949458', '_unique_id': '931119d32b8142dd87d96fc270dc3f4e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.951 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.951 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '99de5e49-27ed-4ca7-a2ec-45fa804efbd2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:54:38.951250', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4eb9ae84-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.093962663, 'message_signature': '7adfa9c06332b7b8f6acfffa0c5578b904335d6998ec717f76505ab36579446b'}]}, 'timestamp': '2025-10-05 09:54:38.951550', '_unique_id': '95b5cd25bbb54e33bd98a42c6765045e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.953 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e384418d-7971-458f-9560-de73d4241d55', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:54:38.952894', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4eb9ee94-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': '8313b69b1e67462ea47cb56216bc6d1119cc0affae52ad2fa543d345691def10'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:54:38.952894', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4eb9f902-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': '933cd335349989a6687d7b561c0b51effe7a00397c868d3bc04679df1f361166'}]}, 'timestamp': '2025-10-05 09:54:38.953443', '_unique_id': '01ad5545260f47809e744507e204bebb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '4c9dce87-9f42-4e05-a5d5-4ec44517360d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:54:38.954839', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4eba3ab6-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': '01abfdcc81093a4e20ce0559bee358a05e012fff99c5ff58342cc50cb84a47a7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:54:38.954839', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4eba4448-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11558.062195266, 'message_signature': 'b1b98a7701ba9faef3d2852806e6b37b416baa0a98719efde88c5489e07a7d55'}]}, 'timestamp': '2025-10-05 09:54:38.955343', '_unique_id': '5819f5cae5944e0bacd04430987f1ccc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:54:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:54:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:54:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 05:54:39 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:54:40 localhost ceph-mon[308154]: Reconfiguring mon.np0005471147 (monmap changed)... 
Oct 5 05:54:40 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471147 on np0005471147.localdomain Oct 5 05:54:40 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:40 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:40 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:54:41 localhost ceph-mon[308154]: mon.np0005471150@4(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:41 localhost systemd[1]: session-69.scope: Deactivated successfully. Oct 5 05:54:41 localhost systemd[1]: session-69.scope: Consumed 1.832s CPU time. Oct 5 05:54:41 localhost systemd-logind[760]: Session 69 logged out. Waiting for processes to exit. Oct 5 05:54:41 localhost systemd-logind[760]: Removed session 69. Oct 5 05:54:41 localhost podman[310188]: Oct 5 05:54:41 localhost podman[310188]: 2025-10-05 09:54:41.755030398 +0000 UTC m=+0.091839251 container create 8d636699b5a8ad7db653387253f7568e120c0a304da2a51b80eb91b533fa605e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_banach, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, ceph=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, distribution-scope=public, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., release=553, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:54:41 localhost systemd[1]: Started libpod-conmon-8d636699b5a8ad7db653387253f7568e120c0a304da2a51b80eb91b533fa605e.scope. Oct 5 05:54:41 localhost podman[310188]: 2025-10-05 09:54:41.719991092 +0000 UTC m=+0.056799945 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:41 localhost systemd[1]: Started libcrun container. Oct 5 05:54:41 localhost podman[310188]: 2025-10-05 09:54:41.837382151 +0000 UTC m=+0.174190994 container init 8d636699b5a8ad7db653387253f7568e120c0a304da2a51b80eb91b533fa605e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_banach, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container) Oct 5 05:54:41 localhost podman[310188]: 
2025-10-05 09:54:41.855543262 +0000 UTC m=+0.192352105 container start 8d636699b5a8ad7db653387253f7568e120c0a304da2a51b80eb91b533fa605e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_banach, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, release=553, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12) Oct 5 05:54:41 localhost podman[310188]: 2025-10-05 09:54:41.856302002 +0000 UTC m=+0.193110895 container attach 8d636699b5a8ad7db653387253f7568e120c0a304da2a51b80eb91b533fa605e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_banach, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, 
build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, release=553) Oct 5 05:54:41 localhost elated_banach[310203]: 167 167 Oct 5 05:54:41 localhost systemd[1]: libpod-8d636699b5a8ad7db653387253f7568e120c0a304da2a51b80eb91b533fa605e.scope: Deactivated successfully. Oct 5 05:54:41 localhost podman[310188]: 2025-10-05 09:54:41.860582998 +0000 UTC m=+0.197391861 container died 8d636699b5a8ad7db653387253f7568e120c0a304da2a51b80eb91b533fa605e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_banach, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12) Oct 5 05:54:41 localhost 
nova_compute[297021]: 2025-10-05 09:54:41.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:42 localhost podman[310208]: 2025-10-05 09:54:42.010735492 +0000 UTC m=+0.136798005 container remove 8d636699b5a8ad7db653387253f7568e120c0a304da2a51b80eb91b533fa605e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_banach, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, release=553, io.buildah.version=1.33.12, vcs-type=git) Oct 5 05:54:42 localhost systemd[1]: libpod-conmon-8d636699b5a8ad7db653387253f7568e120c0a304da2a51b80eb91b533fa605e.scope: Deactivated successfully. Oct 5 05:54:42 localhost ceph-mon[308154]: Reconfiguring mon.np0005471148 (monmap changed)... 
Oct 5 05:54:42 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:54:42 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:42 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:42 localhost ceph-mon[308154]: Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:54:42 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:54:42 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:54:42 localhost ceph-mon[308154]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:42 localhost nova_compute[297021]: 2025-10-05 09:54:42.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:54:42 localhost systemd[1]: tmp-crun.tQPkFY.mount: Deactivated successfully.
Oct 5 05:54:42 localhost systemd[1]: var-lib-containers-storage-overlay-d044af2afe53b5821859d4c959d14e748cdb3e49d7265f6ae5ef11dece136dff-merged.mount: Deactivated successfully.
Oct 5 05:54:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:54:42 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x56322b017080 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Oct 5 05:54:42 localhost ceph-mon[308154]: mon.np0005471150@4(peon) e10 my rank is now 3 (was 4)
Oct 5 05:54:42 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election
Oct 5 05:54:42 localhost ceph-mon[308154]: paxos.3).electionLogic(36) init, last seen epoch 36
Oct 5 05:54:42 localhost ceph-mon[308154]: mon.np0005471150@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:42 localhost ceph-mon[308154]: mon.np0005471150@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:42 localhost ceph-mon[308154]: mon.np0005471150@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:42 localhost systemd[1]: tmp-crun.NfWpW5.mount: Deactivated successfully.
Oct 5 05:54:42 localhost podman[310224]: 2025-10-05 09:54:42.912619543 +0000 UTC m=+0.119250060 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:54:42 localhost podman[310224]: 2025-10-05 09:54:42.924929496 +0000 UTC 
m=+0.131560063 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible) Oct 5 05:54:42 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:54:44 localhost ceph-mon[308154]: mon.np0005471150@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:44 localhost ceph-mon[308154]: mon.np0005471150@3(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:44 localhost ceph-mon[308154]: Remove daemons mon.np0005471147
Oct 5 05:54:44 localhost ceph-mon[308154]: Safe to remove mon.np0005471147: new quorum should be ['np0005471148', 'np0005471152', 'np0005471151', 'np0005471150'] (from ['np0005471148', 'np0005471152', 'np0005471151', 'np0005471150'])
Oct 5 05:54:44 localhost ceph-mon[308154]: Removing monitor np0005471147 from monmap...
Oct 5 05:54:44 localhost ceph-mon[308154]: Removing daemon mon.np0005471147 from np0005471147.localdomain -- ports []
Oct 5 05:54:44 localhost ceph-mon[308154]: mon.np0005471150 calling monitor election
Oct 5 05:54:44 localhost ceph-mon[308154]: mon.np0005471151 calling monitor election
Oct 5 05:54:44 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election
Oct 5 05:54:44 localhost ceph-mon[308154]: mon.np0005471148 calling monitor election
Oct 5 05:54:44 localhost ceph-mon[308154]: mon.np0005471148 is new leader, mons np0005471148,np0005471152,np0005471151,np0005471150 in quorum (ranks 0,1,2,3)
Oct 5 05:54:44 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:54:44 localhost ceph-mon[308154]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:54:44 localhost ceph-mon[308154]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 5 05:54:44 localhost ceph-mon[308154]: stray daemon mgr.np0005471146.xqzesq on host np0005471146.localdomain not managed by cephadm
Oct 5 05:54:44 localhost ceph-mon[308154]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:54:44 localhost ceph-mon[308154]: stray host np0005471146.localdomain has 1 stray daemons: ['mgr.np0005471146.xqzesq']
Oct 5 05:54:45 localhost ceph-mon[308154]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:45 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:45 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:45 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:45 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:46 localhost ceph-mon[308154]: mon.np0005471150@3(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:54:46 localhost nova_compute[297021]: 2025-10-05 09:54:46.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:47 localhost podman[310580]: 2025-10-05 09:54:47.013951991 +0000 UTC m=+0.090488844 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:54:47 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:47 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:47 localhost 
ceph-mon[308154]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:47 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:47 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:47 localhost ceph-mon[308154]: 
from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:47 localhost podman[310580]: 2025-10-05 09:54:47.057129288 +0000 UTC m=+0.133666171 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:54:47 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:54:47 localhost nova_compute[297021]: 2025-10-05 09:54:47.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:54:48 localhost ceph-mon[308154]: Removed label mon from host np0005471147.localdomain
Oct 5 05:54:48 localhost ceph-mon[308154]: Reconfiguring crash.np0005471147 (monmap changed)...
Oct 5 05:54:48 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:54:48 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:48 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:48 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:48 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:49 localhost ceph-mon[308154]: Removed label mgr from host np0005471147.localdomain
Oct 5 05:54:49 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:54:49 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:54:49 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:49 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:49 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:54:50 localhost ceph-mon[308154]: Reconfiguring mon.np0005471148 (monmap changed)...
Oct 5 05:54:50 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain Oct 5 05:54:50 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:50 localhost ceph-mon[308154]: Removed label _admin from host np0005471147.localdomain Oct 5 05:54:50 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:50 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:50 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:54:50 localhost podman[310605]: 2025-10-05 09:54:50.656744101 +0000 UTC m=+0.066857196 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Oct 5 05:54:50 localhost podman[310605]: 2025-10-05 09:54:50.696914126 +0000 UTC m=+0.107027171 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 05:54:50 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:54:51 localhost ceph-mon[308154]: mon.np0005471150@3(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:51 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)... Oct 5 05:54:51 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain Oct 5 05:54:51 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:51 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:51 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:51 localhost systemd[1]: Stopping User Manager for UID 1003... Oct 5 05:54:51 localhost systemd[306069]: Activating special unit Exit the Session... 
Oct 5 05:54:51 localhost systemd[306069]: Stopped target Main User Target.
Oct 5 05:54:51 localhost systemd[306069]: Stopped target Basic System.
Oct 5 05:54:51 localhost systemd[306069]: Stopped target Paths.
Oct 5 05:54:51 localhost systemd[306069]: Stopped target Sockets.
Oct 5 05:54:51 localhost systemd[306069]: Stopped target Timers.
Oct 5 05:54:51 localhost systemd[306069]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 5 05:54:51 localhost systemd[306069]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 5 05:54:51 localhost systemd[306069]: Closed D-Bus User Message Bus Socket.
Oct 5 05:54:51 localhost systemd[306069]: Stopped Create User's Volatile Files and Directories.
Oct 5 05:54:51 localhost systemd[306069]: Removed slice User Application Slice.
Oct 5 05:54:51 localhost systemd[306069]: Reached target Shutdown.
Oct 5 05:54:51 localhost systemd[306069]: Finished Exit the Session.
Oct 5 05:54:51 localhost systemd[306069]: Reached target Exit the Session.
Oct 5 05:54:51 localhost systemd[1]: user@1003.service: Deactivated successfully.
Oct 5 05:54:51 localhost systemd[1]: Stopped User Manager for UID 1003.
Oct 5 05:54:51 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Oct 5 05:54:51 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Oct 5 05:54:51 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Oct 5 05:54:51 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Oct 5 05:54:51 localhost systemd[1]: Removed slice User Slice of UID 1003.
Oct 5 05:54:51 localhost systemd[1]: user-1003.slice: Consumed 2.515s CPU time.
Oct 5 05:54:51 localhost podman[248506]: time="2025-10-05T09:54:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:54:51 localhost podman[248506]: @ - - [05/Oct/2025:09:54:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:54:51 localhost podman[248506]: @ - - [05/Oct/2025:09:54:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18828 "" "Go-http-client/1.1" Oct 5 05:54:51 localhost nova_compute[297021]: 2025-10-05 09:54:51.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:52 localhost openstack_network_exporter[250601]: ERROR 09:54:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:54:52 localhost openstack_network_exporter[250601]: ERROR 09:54:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:54:52 localhost openstack_network_exporter[250601]: ERROR 09:54:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:54:52 localhost openstack_network_exporter[250601]: ERROR 09:54:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:54:52 localhost openstack_network_exporter[250601]: Oct 5 05:54:52 localhost openstack_network_exporter[250601]: ERROR 09:54:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:54:52 localhost openstack_network_exporter[250601]: Oct 5 05:54:52 localhost podman[310677]: Oct 5 05:54:52 localhost podman[310677]: 2025-10-05 09:54:52.068004355 +0000 UTC m=+0.075284684 container create 4dec2c8605b31c7dc4b3001de2ab1f46ee1329f8a87f9b219d493a520f331bca 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamport, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, RELEASE=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc.) Oct 5 05:54:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:54:52 localhost systemd[1]: Started libpod-conmon-4dec2c8605b31c7dc4b3001de2ab1f46ee1329f8a87f9b219d493a520f331bca.scope. Oct 5 05:54:52 localhost systemd[1]: Started libcrun container. 
Oct 5 05:54:52 localhost podman[310677]: 2025-10-05 09:54:52.137583714 +0000 UTC m=+0.144864043 container init 4dec2c8605b31c7dc4b3001de2ab1f46ee1329f8a87f9b219d493a520f331bca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamport, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.) Oct 5 05:54:52 localhost podman[310677]: 2025-10-05 09:54:52.044536822 +0000 UTC m=+0.051817171 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:52 localhost systemd[1]: tmp-crun.EZrlYx.mount: Deactivated successfully. 
Oct 5 05:54:52 localhost podman[310677]: 2025-10-05 09:54:52.1537218 +0000 UTC m=+0.161002149 container start 4dec2c8605b31c7dc4b3001de2ab1f46ee1329f8a87f9b219d493a520f331bca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamport, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, version=7) Oct 5 05:54:52 localhost podman[310677]: 2025-10-05 09:54:52.154042639 +0000 UTC m=+0.161322998 container attach 4dec2c8605b31c7dc4b3001de2ab1f46ee1329f8a87f9b219d493a520f331bca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamport, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, ceph=True, 
GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:54:52 localhost eager_lamport[310693]: 167 167 Oct 5 05:54:52 localhost systemd[1]: libpod-4dec2c8605b31c7dc4b3001de2ab1f46ee1329f8a87f9b219d493a520f331bca.scope: Deactivated successfully. Oct 5 05:54:52 localhost podman[310677]: 2025-10-05 09:54:52.159010283 +0000 UTC m=+0.166290662 container died 4dec2c8605b31c7dc4b3001de2ab1f46ee1329f8a87f9b219d493a520f331bca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamport, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.33.12, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, name=rhceph, 
vcs-type=git, description=Red Hat Ceph Storage 7) Oct 5 05:54:52 localhost ceph-mon[308154]: Reconfiguring crash.np0005471148 (monmap changed)... Oct 5 05:54:52 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:54:52 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:52 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:52 localhost ceph-mon[308154]: Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:54:52 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:52 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:54:52 localhost podman[310692]: 2025-10-05 09:54:52.208978652 +0000 UTC m=+0.098182752 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=edpm, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6) Oct 5 05:54:52 localhost podman[310692]: 2025-10-05 09:54:52.251696025 +0000 UTC m=+0.140900086 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible) Oct 5 05:54:52 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:54:52 localhost podman[310708]: 2025-10-05 09:54:52.277083781 +0000 UTC m=+0.101771139 container remove 4dec2c8605b31c7dc4b3001de2ab1f46ee1329f8a87f9b219d493a520f331bca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_lamport, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, ceph=True, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main) Oct 5 05:54:52 localhost systemd[1]: libpod-conmon-4dec2c8605b31c7dc4b3001de2ab1f46ee1329f8a87f9b219d493a520f331bca.scope: Deactivated successfully. 
Oct 5 05:54:52 localhost nova_compute[297021]: 2025-10-05 09:54:52.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:52 localhost podman[310787]: Oct 5 05:54:52 localhost podman[310787]: 2025-10-05 09:54:52.995087117 +0000 UTC m=+0.076465835 container create 8a652e1f139c1243778b81e20878ef94de470322058f71fdaabbb8b4dbbd3650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_khorana, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, version=7, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph) Oct 5 05:54:53 localhost systemd[1]: Started libpod-conmon-8a652e1f139c1243778b81e20878ef94de470322058f71fdaabbb8b4dbbd3650.scope. Oct 5 05:54:53 localhost systemd[1]: Started libcrun container. 
Oct 5 05:54:53 localhost podman[310787]: 2025-10-05 09:54:53.053205247 +0000 UTC m=+0.134583955 container init 8a652e1f139c1243778b81e20878ef94de470322058f71fdaabbb8b4dbbd3650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_khorana, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, version=7, GIT_CLEAN=True, release=553, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux ) Oct 5 05:54:53 localhost podman[310787]: 2025-10-05 09:54:52.964781679 +0000 UTC m=+0.046160417 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:53 localhost systemd[1]: var-lib-containers-storage-overlay-50b82a537e26b12c632f96bd6951fb90f5a4d6fd57e2460bf45570a20e608009-merged.mount: Deactivated successfully. 
Oct 5 05:54:53 localhost podman[310787]: 2025-10-05 09:54:53.086151006 +0000 UTC m=+0.167529724 container start 8a652e1f139c1243778b81e20878ef94de470322058f71fdaabbb8b4dbbd3650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_khorana, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Oct 5 05:54:53 localhost podman[310787]: 2025-10-05 09:54:53.086424684 +0000 UTC m=+0.167803392 container attach 8a652e1f139c1243778b81e20878ef94de470322058f71fdaabbb8b4dbbd3650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_khorana, GIT_CLEAN=True, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.expose-services=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container) Oct 5 05:54:53 localhost optimistic_khorana[310803]: 167 167 Oct 5 05:54:53 localhost systemd[1]: libpod-8a652e1f139c1243778b81e20878ef94de470322058f71fdaabbb8b4dbbd3650.scope: Deactivated successfully. Oct 5 05:54:53 localhost podman[310787]: 2025-10-05 09:54:53.091155032 +0000 UTC m=+0.172533770 container died 8a652e1f139c1243778b81e20878ef94de470322058f71fdaabbb8b4dbbd3650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_khorana, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.buildah.version=1.33.12, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, architecture=x86_64, io.openshift.tags=rhceph ceph, release=553, maintainer=Guillaume Abrioux , 
com.redhat.component=rhceph-container) Oct 5 05:54:53 localhost systemd[1]: var-lib-containers-storage-overlay-fa96ca9f2a454fa01aac3e0739109224cc59c2784211c5302c2eb9a098f6547d-merged.mount: Deactivated successfully. Oct 5 05:54:53 localhost podman[310808]: 2025-10-05 09:54:53.187311378 +0000 UTC m=+0.083956138 container remove 8a652e1f139c1243778b81e20878ef94de470322058f71fdaabbb8b4dbbd3650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_khorana, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, GIT_CLEAN=True, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=) Oct 5 05:54:53 localhost systemd[1]: libpod-conmon-8a652e1f139c1243778b81e20878ef94de470322058f71fdaabbb8b4dbbd3650.scope: Deactivated successfully. Oct 5 05:54:53 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:53 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:53 localhost ceph-mon[308154]: Reconfiguring osd.1 (monmap changed)... 
Oct 5 05:54:53 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:54:53 localhost ceph-mon[308154]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:54:54 localhost podman[310886]: Oct 5 05:54:54 localhost podman[310886]: 2025-10-05 09:54:54.055298194 +0000 UTC m=+0.080358641 container create 795a90f03ed42e27ec8eb9e3ee08f7e63dcc1356c81b8b268a476e433db2411b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_shamir, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Oct 5 05:54:54 localhost systemd[1]: Started libpod-conmon-795a90f03ed42e27ec8eb9e3ee08f7e63dcc1356c81b8b268a476e433db2411b.scope. Oct 5 05:54:54 localhost systemd[1]: Started libcrun container. 
Oct 5 05:54:54 localhost podman[310886]: 2025-10-05 09:54:54.117101313 +0000 UTC m=+0.142161760 container init 795a90f03ed42e27ec8eb9e3ee08f7e63dcc1356c81b8b268a476e433db2411b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_shamir, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=553, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, RELEASE=main, build-date=2025-09-24T08:57:55) Oct 5 05:54:54 localhost podman[310886]: 2025-10-05 09:54:54.021933293 +0000 UTC m=+0.046993810 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:54 localhost podman[310886]: 2025-10-05 09:54:54.126878666 +0000 UTC m=+0.151939123 container start 795a90f03ed42e27ec8eb9e3ee08f7e63dcc1356c81b8b268a476e433db2411b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_shamir, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.k8s.display-name=Red 
Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, release=553, io.openshift.expose-services=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc.) Oct 5 05:54:54 localhost podman[310886]: 2025-10-05 09:54:54.127091562 +0000 UTC m=+0.152152009 container attach 795a90f03ed42e27ec8eb9e3ee08f7e63dcc1356c81b8b268a476e433db2411b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_shamir, description=Red Hat Ceph Storage 7, name=rhceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, release=553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_CLEAN=True) Oct 5 05:54:54 localhost 
focused_shamir[310900]: 167 167 Oct 5 05:54:54 localhost systemd[1]: libpod-795a90f03ed42e27ec8eb9e3ee08f7e63dcc1356c81b8b268a476e433db2411b.scope: Deactivated successfully. Oct 5 05:54:54 localhost podman[310886]: 2025-10-05 09:54:54.131181683 +0000 UTC m=+0.156242210 container died 795a90f03ed42e27ec8eb9e3ee08f7e63dcc1356c81b8b268a476e433db2411b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_shamir, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, ceph=True, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-type=git) Oct 5 05:54:54 localhost podman[310905]: 2025-10-05 09:54:54.227429331 +0000 UTC m=+0.083726132 container remove 795a90f03ed42e27ec8eb9e3ee08f7e63dcc1356c81b8b268a476e433db2411b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_shamir, io.buildah.version=1.33.12, version=7, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, RELEASE=main, description=Red Hat Ceph Storage 7, 
io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph) Oct 5 05:54:54 localhost systemd[1]: libpod-conmon-795a90f03ed42e27ec8eb9e3ee08f7e63dcc1356c81b8b268a476e433db2411b.scope: Deactivated successfully. Oct 5 05:54:54 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:54 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:54 localhost ceph-mon[308154]: Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:54:54 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:54:54 localhost ceph-mon[308154]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:54:55 localhost podman[310982]: Oct 5 05:54:55 localhost podman[310982]: 2025-10-05 09:54:55.043235738 +0000 UTC m=+0.083869395 container create 5366851cad508a979451792a13d29927d0e5f9c7209425abfa77c67b16c9e886 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_wing, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vcs-type=git, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container) Oct 5 05:54:55 localhost systemd[1]: Started libpod-conmon-5366851cad508a979451792a13d29927d0e5f9c7209425abfa77c67b16c9e886.scope. Oct 5 05:54:55 localhost systemd[1]: var-lib-containers-storage-overlay-09073eec095363e8a1f7d7c83eee61ca61d560f9c2bcb04f76a467be63d7bed1-merged.mount: Deactivated successfully. 
Oct 5 05:54:55 localhost podman[310982]: 2025-10-05 09:54:55.004959435 +0000 UTC m=+0.045593122 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:55 localhost systemd[1]: Started libcrun container. Oct 5 05:54:55 localhost podman[310982]: 2025-10-05 09:54:55.127524594 +0000 UTC m=+0.168158251 container init 5366851cad508a979451792a13d29927d0e5f9c7209425abfa77c67b16c9e886 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_wing, ceph=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-09-24T08:57:55, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7) Oct 5 05:54:55 localhost systemd[1]: tmp-crun.JpPxBu.mount: Deactivated successfully. 
Oct 5 05:54:55 localhost podman[310982]: 2025-10-05 09:54:55.147054071 +0000 UTC m=+0.187687738 container start 5366851cad508a979451792a13d29927d0e5f9c7209425abfa77c67b16c9e886 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_wing, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, ceph=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553) Oct 5 05:54:55 localhost sharp_wing[310997]: 167 167 Oct 5 05:54:55 localhost podman[310982]: 2025-10-05 09:54:55.149551629 +0000 UTC m=+0.190185286 container attach 5366851cad508a979451792a13d29927d0e5f9c7209425abfa77c67b16c9e886 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_wing, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, name=rhceph, maintainer=Guillaume Abrioux ) Oct 5 05:54:55 localhost systemd[1]: libpod-5366851cad508a979451792a13d29927d0e5f9c7209425abfa77c67b16c9e886.scope: Deactivated successfully. Oct 5 05:54:55 localhost podman[310982]: 2025-10-05 09:54:55.150736461 +0000 UTC m=+0.191370138 container died 5366851cad508a979451792a13d29927d0e5f9c7209425abfa77c67b16c9e886 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_wing, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, version=7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, 
vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 5 05:54:55 localhost podman[311002]: 2025-10-05 09:54:55.247431222 +0000 UTC m=+0.086891917 container remove 5366851cad508a979451792a13d29927d0e5f9c7209425abfa77c67b16c9e886 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_wing, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 05:54:55 localhost systemd[1]: libpod-conmon-5366851cad508a979451792a13d29927d0e5f9c7209425abfa77c67b16c9e886.scope: Deactivated successfully. Oct 5 05:54:55 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:55 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:55 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... 
Oct 5 05:54:55 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:54:55 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:54:55 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:55 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:55 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:55 localhost podman[311072]: Oct 5 05:54:55 localhost podman[311072]: 2025-10-05 09:54:55.942759696 +0000 UTC m=+0.075943622 container create f8b6944b8252db4be8f0a9a27d97bbb80e270172f42e28c156badc1630b6408d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_easley, GIT_CLEAN=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git) Oct 5 05:54:55 localhost systemd[1]: Started libpod-conmon-f8b6944b8252db4be8f0a9a27d97bbb80e270172f42e28c156badc1630b6408d.scope. Oct 5 05:54:55 localhost systemd[1]: Started libcrun container. Oct 5 05:54:56 localhost podman[311072]: 2025-10-05 09:54:56.010497086 +0000 UTC m=+0.143681012 container init f8b6944b8252db4be8f0a9a27d97bbb80e270172f42e28c156badc1630b6408d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_easley, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64) Oct 5 05:54:56 localhost podman[311072]: 2025-10-05 09:54:55.912491269 +0000 UTC m=+0.045675245 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:56 localhost podman[311072]: 2025-10-05 09:54:56.019904169 +0000 UTC m=+0.153088105 container start 
f8b6944b8252db4be8f0a9a27d97bbb80e270172f42e28c156badc1630b6408d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_easley, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.buildah.version=1.33.12, version=7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Oct 5 05:54:56 localhost podman[311072]: 2025-10-05 09:54:56.020185497 +0000 UTC m=+0.153369423 container attach f8b6944b8252db4be8f0a9a27d97bbb80e270172f42e28c156badc1630b6408d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_easley, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, vcs-type=git, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, RELEASE=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, 
build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux ) Oct 5 05:54:56 localhost serene_easley[311087]: 167 167 Oct 5 05:54:56 localhost systemd[1]: libpod-f8b6944b8252db4be8f0a9a27d97bbb80e270172f42e28c156badc1630b6408d.scope: Deactivated successfully. Oct 5 05:54:56 localhost podman[311072]: 2025-10-05 09:54:56.023361822 +0000 UTC m=+0.156545798 container died f8b6944b8252db4be8f0a9a27d97bbb80e270172f42e28c156badc1630b6408d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_easley, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:54:56 localhost ceph-mon[308154]: mon.np0005471150@3(peon).osd e83 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:56 localhost systemd[1]: var-lib-containers-storage-overlay-9f4d7ae73f29aa7eba4943a8759e4b5dcf43cdd2d295feb5d3aba3e773ae1d07-merged.mount: Deactivated successfully. Oct 5 05:54:56 localhost systemd[1]: var-lib-containers-storage-overlay-49cefaf40f4fc46d96d0feafc7c81586577ad44f1803d9a56fa5f956e2965a8d-merged.mount: Deactivated successfully. Oct 5 05:54:56 localhost podman[311092]: 2025-10-05 09:54:56.126274291 +0000 UTC m=+0.094359609 container remove f8b6944b8252db4be8f0a9a27d97bbb80e270172f42e28c156badc1630b6408d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_easley, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, release=553, version=7, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 05:54:56 localhost systemd[1]: libpod-conmon-f8b6944b8252db4be8f0a9a27d97bbb80e270172f42e28c156badc1630b6408d.scope: Deactivated successfully. Oct 5 05:54:56 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... 
Oct 5 05:54:56 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:54:56 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:56 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:56 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:54:56 localhost podman[311161]: Oct 5 05:54:56 localhost podman[311161]: 2025-10-05 09:54:56.839770006 +0000 UTC m=+0.077340889 container create d7d2a5808a4bd57cf4ac5166341abcd8d570243255340eeed30cfa1745d4eada (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, release=553, version=7, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) Oct 5 05:54:56 localhost systemd[1]: Started libpod-conmon-d7d2a5808a4bd57cf4ac5166341abcd8d570243255340eeed30cfa1745d4eada.scope. 
Oct 5 05:54:56 localhost systemd[1]: Started libcrun container. Oct 5 05:54:56 localhost podman[311161]: 2025-10-05 09:54:56.808349877 +0000 UTC m=+0.045920800 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:56 localhost podman[311161]: 2025-10-05 09:54:56.911747719 +0000 UTC m=+0.149318602 container init d7d2a5808a4bd57cf4ac5166341abcd8d570243255340eeed30cfa1745d4eada (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:54:56 localhost podman[311161]: 2025-10-05 09:54:56.921926014 +0000 UTC m=+0.159496897 container start d7d2a5808a4bd57cf4ac5166341abcd8d570243255340eeed30cfa1745d4eada (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, build-date=2025-09-24T08:57:55, release=553, 
io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, version=7) Oct 5 05:54:56 localhost podman[311161]: 2025-10-05 09:54:56.922201352 +0000 UTC m=+0.159772245 container attach d7d2a5808a4bd57cf4ac5166341abcd8d570243255340eeed30cfa1745d4eada (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, architecture=x86_64, 
build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux ) Oct 5 05:54:56 localhost recursing_mclean[311176]: 167 167 Oct 5 05:54:56 localhost systemd[1]: libpod-d7d2a5808a4bd57cf4ac5166341abcd8d570243255340eeed30cfa1745d4eada.scope: Deactivated successfully. Oct 5 05:54:56 localhost podman[311161]: 2025-10-05 09:54:56.925390418 +0000 UTC m=+0.162961341 container died d7d2a5808a4bd57cf4ac5166341abcd8d570243255340eeed30cfa1745d4eada (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main, distribution-scope=public, name=rhceph, ceph=True, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, release=553) Oct 5 05:54:56 localhost nova_compute[297021]: 2025-10-05 09:54:56.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:57 localhost podman[311181]: 2025-10-05 09:54:57.024314039 +0000 UTC m=+0.086234359 container remove d7d2a5808a4bd57cf4ac5166341abcd8d570243255340eeed30cfa1745d4eada (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_mclean, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main) Oct 5 05:54:57 localhost systemd[1]: libpod-conmon-d7d2a5808a4bd57cf4ac5166341abcd8d570243255340eeed30cfa1745d4eada.scope: Deactivated successfully. Oct 5 05:54:57 localhost systemd[1]: var-lib-containers-storage-overlay-1d24dc3e61c8fab4917823195bef730f2a3395d341827272d31b0cd3aac63472-merged.mount: Deactivated successfully. Oct 5 05:54:57 localhost nova_compute[297021]: 2025-10-05 09:54:57.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:54:58 localhost ceph-mon[308154]: Reconfiguring mon.np0005471150 (monmap changed)... 
Oct 5 05:54:58 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:54:58 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:58 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:58 localhost ceph-mon[308154]: Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:54:58 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:54:58 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:54:58 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:58 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:58 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.
Oct 5 05:54:58 localhost podman[311199]: 2025-10-05 09:54:58.678211355 +0000 UTC m=+0.083509346 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 5 05:54:58 localhost podman[311199]: 2025-10-05 09:54:58.692770008 +0000 UTC m=+0.098068029 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 5 05:54:58 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 05:54:59 localhost ceph-mon[308154]: Reconfiguring osd.2 (monmap changed)...
Oct 5 05:54:59 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:54:59 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:59 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:59 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Oct 5 05:55:00 localhost ceph-mon[308154]: Reconfiguring osd.5 (monmap changed)...
Oct 5 05:55:00 localhost ceph-mon[308154]: Reconfiguring daemon osd.5 on np0005471151.localdomain
Oct 5 05:55:00 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:00 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:00 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)...
Oct 5 05:55:00 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:55:00 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain
Oct 5 05:55:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 5 05:55:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5326 writes, 23K keys, 5326 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5326 writes, 749 syncs, 7.11 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 121 writes, 412 keys, 121 commit groups, 1.0 writes per commit group, ingest: 0.69 MB, 0.00 MB/s#012Interval WAL: 121 writes, 48 syncs, 2.52 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 5 05:55:01 localhost ceph-mon[308154]: mon.np0005471150@3(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:55:01 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)...
Oct 5 05:55:01 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:55:01 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain
Oct 5 05:55:01 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[308154]: Added label _no_schedule to host np0005471147.localdomain
Oct 5 05:55:01 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[308154]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471147.localdomain
Oct 5 05:55:01 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:55:02 localhost nova_compute[297021]: 2025-10-05 09:55:01.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.498776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102498823, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1826, "num_deletes": 253, "total_data_size": 4021554, "memory_usage": 4068272, "flush_reason": "Manual Compaction"}
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102517799, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 2331910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11893, "largest_seqno": 13714, "table_properties": {"data_size": 2323975, "index_size": 4568, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 20602, "raw_average_key_size": 22, "raw_value_size": 2306611, "raw_average_value_size": 2515, "num_data_blocks": 198, "num_entries": 917, "num_filter_entries": 917, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658067, "oldest_key_time": 1759658067, "file_creation_time": 1759658102, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 19070 microseconds, and 7290 cpu microseconds.
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.517848) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 2331910 bytes OK
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.517874) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.519688) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.519709) EVENT_LOG_v1 {"time_micros": 1759658102519703, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.519732) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 4012431, prev total WAL file size 4028565, number of live WAL files 2.
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.520634) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(2277KB)], [18(14MB)]
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102520708, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18033523, "oldest_snapshot_seqno": -1}
Oct 5 05:55:02 localhost nova_compute[297021]: 2025-10-05 09:55:02.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:55:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10503 keys, 14372263 bytes, temperature: kUnknown
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102640144, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14372263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14309887, "index_size": 35011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26309, "raw_key_size": 281367, "raw_average_key_size": 26, "raw_value_size": 14128081, "raw_average_value_size": 1345, "num_data_blocks": 1336, "num_entries": 10503, "num_filter_entries": 10503, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658102, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.640620) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14372263 bytes
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.642292) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.8 rd, 120.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 15.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(13.9) write-amplify(6.2) OK, records in: 11046, records dropped: 543 output_compression: NoCompression
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.642323) EVENT_LOG_v1 {"time_micros": 1759658102642309, "job": 8, "event": "compaction_finished", "compaction_time_micros": 119618, "compaction_time_cpu_micros": 42100, "output_level": 6, "num_output_files": 1, "total_output_size": 14372263, "num_input_records": 11046, "num_output_records": 10503, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102642919, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102645421, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.520528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.645531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.645540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.645543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.645546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:55:02.645549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost podman[311222]: 2025-10-05 09:55:02.677687383 +0000 UTC m=+0.080650399 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 5 05:55:02 localhost podman[311222]: 2025-10-05 09:55:02.688786743 +0000 UTC m=+0.091749749 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 05:55:02 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 05:55:02 localhost ceph-mon[308154]: Reconfiguring mon.np0005471151 (monmap changed)...
Oct 5 05:55:02 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain
Oct 5 05:55:02 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:02 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:02 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:55:03 localhost nova_compute[297021]: 2025-10-05 09:55:03.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:55:03 localhost nova_compute[297021]: 2025-10-05 09:55:03.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 5 05:55:03 localhost ceph-mon[308154]: Reconfiguring crash.np0005471152 (monmap changed)...
Oct 5 05:55:03 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain
Oct 5 05:55:03 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:03 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:03 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 5 05:55:03 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:03 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain"} : dispatch
Oct 5 05:55:03 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain"}]': finished
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.449 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.449 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.450 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.450 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.450 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:55:04 localhost ceph-mon[308154]: mon.np0005471150@3(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 05:55:04 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1586063772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 05:55:04 localhost ceph-mon[308154]: Reconfiguring osd.0 (monmap changed)...
Oct 5 05:55:04 localhost ceph-mon[308154]: Reconfiguring daemon osd.0 on np0005471152.localdomain
Oct 5 05:55:04 localhost ceph-mon[308154]: Removed host np0005471147.localdomain
Oct 5 05:55:04 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:04 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:04 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.908 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.968 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 05:55:04 localhost nova_compute[297021]: 2025-10-05 09:55:04.969 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 05:55:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 5 05:55:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5593 writes, 24K keys, 5593 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5593 writes, 794 syncs, 7.04 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 118 writes, 271 keys, 118 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s#012Interval WAL: 118 writes, 59 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.184 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.186 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11794MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.186 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.187 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.256 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.257 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.257 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.318 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:55:05 localhost ceph-mon[308154]: mon.np0005471150@3(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 05:55:05 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1374197684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.769 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.776 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.793 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.796 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 5 05:55:05 localhost nova_compute[297021]: 2025-10-05 09:55:05.797 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:55:05 localhost ceph-mon[308154]: Reconfiguring osd.3 (monmap changed)...
Oct 5 05:55:05 localhost ceph-mon[308154]: Reconfiguring daemon osd.3 on np0005471152.localdomain
Oct 5 05:55:05 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:05 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:05 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:55:06 localhost ceph-mon[308154]: mon.np0005471150@3(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:55:06 localhost podman[311291]: 2025-10-05 09:55:06.676890473 +0000 UTC m=+0.080322250 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:55:06 localhost podman[311291]: 2025-10-05 09:55:06.714974202 +0000 UTC m=+0.118405999 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 05:55:06 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:55:06 localhost podman[311290]: 2025-10-05 09:55:06.735530536 +0000 UTC m=+0.141821290 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:55:06 localhost podman[311290]: 2025-10-05 09:55:06.748830766 +0000 UTC m=+0.155121510 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:55:06 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:55:06 localhost nova_compute[297021]: 2025-10-05 09:55:06.794 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:06 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:55:06 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:55:06 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:06 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:06 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:55:07 localhost nova_compute[297021]: 2025-10-05 09:55:07.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:07 localhost nova_compute[297021]: 2025-10-05 09:55:07.416 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:07 localhost nova_compute[297021]: 2025-10-05 09:55:07.446 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:07 localhost nova_compute[297021]: 2025-10-05 09:55:07.564 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:07 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... Oct 5 05:55:07 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:55:07 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:07 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:07 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:55:09 localhost ceph-mon[308154]: Reconfiguring mon.np0005471152 (monmap changed)... Oct 5 05:55:09 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:55:09 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:09 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:09 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:55:09 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:09 localhost nova_compute[297021]: 2025-10-05 09:55:09.423 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:09 localhost nova_compute[297021]: 2025-10-05 09:55:09.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal 
instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:55:09 localhost nova_compute[297021]: 2025-10-05 09:55:09.424 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:55:09 localhost nova_compute[297021]: 2025-10-05 09:55:09.926 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:55:09 localhost nova_compute[297021]: 2025-10-05 09:55:09.927 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:55:09 localhost nova_compute[297021]: 2025-10-05 09:55:09.927 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:55:09 localhost nova_compute[297021]: 2025-10-05 09:55:09.928 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:55:10 localhost nova_compute[297021]: 2025-10-05 09:55:10.311 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": 
"4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:55:10 localhost nova_compute[297021]: 2025-10-05 09:55:10.328 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:55:10 localhost nova_compute[297021]: 2025-10-05 09:55:10.329 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:55:11 localhost ceph-mon[308154]: mon.np0005471150@3(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 
full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:12 localhost nova_compute[297021]: 2025-10-05 09:55:12.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:12 localhost nova_compute[297021]: 2025-10-05 09:55:12.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:12 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:12 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:12 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:55:12 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:55:13 localhost ceph-mon[308154]: Saving service mon spec with placement label:mon Oct 5 05:55:13 localhost podman[311364]: 2025-10-05 09:55:13.690592679 +0000 UTC m=+0.095456548 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:55:13 localhost podman[311364]: 2025-10-05 09:55:13.726871129 +0000 UTC m=+0.131735028 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 05:55:13 localhost systemd[1]: 
ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:55:15 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x56322b0171e0 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Oct 5 05:55:15 localhost ceph-mon[308154]: mon.np0005471150@3(peon) e11 my rank is now 2 (was 3) Oct 5 05:55:15 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:55:15 localhost ceph-mon[308154]: paxos.2).electionLogic(38) init, last seen epoch 38 Oct 5 05:55:15 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:55:17 localhost nova_compute[297021]: 2025-10-05 09:55:17.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:17 localhost nova_compute[297021]: 2025-10-05 09:55:17.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:55:17 localhost systemd[1]: tmp-crun.LAo5n5.mount: Deactivated successfully. 
Oct 5 05:55:17 localhost podman[311382]: 2025-10-05 09:55:17.688975287 +0000 UTC m=+0.093330321 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:55:17 localhost podman[311382]: 2025-10-05 09:55:17.735791261 +0000 UTC m=+0.140146275 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:55:17 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:55:18 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e11 handle_auth_request failed to assign global_id Oct 5 05:55:19 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e11 handle_auth_request failed to assign global_id Oct 5 05:55:19 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e11 handle_auth_request failed to assign global_id Oct 5 05:55:19 localhost ceph-mds[300279]: mds.beacon.mds.np0005471150.bsiqok missed beacon ack from the monitors Oct 5 05:55:20 localhost ceph-mon[308154]: paxos.2).electionLogic(39) init, last seen epoch 39, mid-election, bumping Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:55:20 localhost ceph-mon[308154]: Safe to remove mon.np0005471151: new quorum should be ['np0005471148', 'np0005471152', 'np0005471150'] (from ['np0005471148', 'np0005471152', 'np0005471150']) Oct 5 05:55:20 localhost ceph-mon[308154]: Removing monitor np0005471151 from monmap... 
Oct 5 05:55:20 localhost ceph-mon[308154]: Removing daemon mon.np0005471151 from np0005471151.localdomain -- ports []
Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471148 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471150 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471148 is new leader, mons np0005471148,np0005471152 in quorum (ranks 0,1)
Oct 5 05:55:20 localhost ceph-mon[308154]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[308154]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[308154]: stray daemon mgr.np0005471146.xqzesq on host np0005471146.localdomain not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[308154]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[308154]: stray host np0005471146.localdomain has 1 stray daemons: ['mgr.np0005471146.xqzesq']
Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471148 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471148 is new leader, mons np0005471148,np0005471152,np0005471150 in quorum (ranks 0,1,2)
Oct 5 05:55:20 localhost ceph-mon[308154]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[308154]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[308154]: stray daemon mgr.np0005471146.xqzesq on host np0005471146.localdomain not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[308154]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[308154]: stray host np0005471146.localdomain has 1 stray daemons: ['mgr.np0005471146.xqzesq']
Oct 5 05:55:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:20 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 5 05:55:20 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/223983947' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 5 05:55:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 5 05:55:20 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/223983947' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 5 05:55:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:55:20.454 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:55:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:55:20.455 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:55:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:55:20.456 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:55:21 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:55:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.
Oct 5 05:55:21 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:55:21 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost systemd[1]: tmp-crun.qiFDwv.mount: Deactivated successfully.
Oct 5 05:55:21 localhost podman[311425]: 2025-10-05 09:55:21.336462091 +0000 UTC m=+0.096637680 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 5 05:55:21 localhost podman[311425]: 2025-10-05 09:55:21.371062295 +0000 UTC m=+0.131237904 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 5 05:55:21 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully.
Oct 5 05:55:21 localhost podman[248506]: time="2025-10-05T09:55:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:55:21 localhost podman[248506]: @ - - [05/Oct/2025:09:55:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1"
Oct 5 05:55:21 localhost podman[248506]: @ - - [05/Oct/2025:09:55:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18826 "" "Go-http-client/1.1"
Oct 5 05:55:22 localhost openstack_network_exporter[250601]: ERROR 09:55:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:55:22 localhost openstack_network_exporter[250601]: ERROR 09:55:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:55:22 localhost openstack_network_exporter[250601]: ERROR 09:55:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:55:22 localhost openstack_network_exporter[250601]: ERROR 09:55:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:55:22 localhost openstack_network_exporter[250601]:
Oct 5 05:55:22 localhost nova_compute[297021]: 2025-10-05 09:55:22.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:55:22 localhost openstack_network_exporter[250601]: ERROR 09:55:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:55:22 localhost openstack_network_exporter[250601]:
Oct 5 05:55:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.
Oct 5 05:55:22 localhost podman[311709]: 2025-10-05 09:55:22.483112611 +0000 UTC m=+0.083755392 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 5 05:55:22 localhost podman[311709]: 2025-10-05 09:55:22.498825656 +0000 UTC m=+0.099468387 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 5 05:55:22 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully.
Oct 5 05:55:22 localhost nova_compute[297021]: 2025-10-05 09:55:22.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:55:22 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:22 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:22 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:22 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:22 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:23 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:55:23 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:55:23 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:55:23 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:23 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:23 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:55:24 localhost ceph-mon[308154]: Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:55:24 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:55:24 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:24 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:24 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:55:25 localhost podman[311836]:
Oct 5 05:55:25 localhost podman[311836]: 2025-10-05 09:55:25.296080374 +0000 UTC m=+0.078329976 container create dcd98a6870dd232eb223890697096723dd1d498a5839c4e709df74e88cb759ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_lumiere, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main)
Oct 5 05:55:25 localhost systemd[1]: Started libpod-conmon-dcd98a6870dd232eb223890697096723dd1d498a5839c4e709df74e88cb759ae.scope.
Oct 5 05:55:25 localhost podman[311836]: 2025-10-05 09:55:25.264303865 +0000 UTC m=+0.046553507 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:55:25 localhost systemd[1]: Started libcrun container.
Oct 5 05:55:25 localhost podman[311836]: 2025-10-05 09:55:25.398975241 +0000 UTC m=+0.181224843 container init dcd98a6870dd232eb223890697096723dd1d498a5839c4e709df74e88cb759ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_lumiere, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.buildah.version=1.33.12, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, release=553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.openshift.expose-services=)
Oct 5 05:55:25 localhost podman[311836]: 2025-10-05 09:55:25.411804918 +0000 UTC m=+0.194054520 container start dcd98a6870dd232eb223890697096723dd1d498a5839c4e709df74e88cb759ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_lumiere, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12)
Oct 5 05:55:25 localhost podman[311836]: 2025-10-05 09:55:25.412088985 +0000 UTC m=+0.194338617 container attach dcd98a6870dd232eb223890697096723dd1d498a5839c4e709df74e88cb759ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_lumiere, ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, vendor=Red Hat, Inc., release=553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:55:25 localhost angry_lumiere[311851]: 167 167
Oct 5 05:55:25 localhost systemd[1]: libpod-dcd98a6870dd232eb223890697096723dd1d498a5839c4e709df74e88cb759ae.scope: Deactivated successfully.
Oct 5 05:55:25 localhost podman[311836]: 2025-10-05 09:55:25.414659235 +0000 UTC m=+0.196908847 container died dcd98a6870dd232eb223890697096723dd1d498a5839c4e709df74e88cb759ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_lumiere, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, release=553, vcs-type=git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:55:25 localhost podman[311856]: 2025-10-05 09:55:25.512975239 +0000 UTC m=+0.085502630 container remove dcd98a6870dd232eb223890697096723dd1d498a5839c4e709df74e88cb759ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_lumiere, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, distribution-scope=public, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_CLEAN=True)
Oct 5 05:55:25 localhost systemd[1]: libpod-conmon-dcd98a6870dd232eb223890697096723dd1d498a5839c4e709df74e88cb759ae.scope: Deactivated successfully.
Oct 5 05:55:25 localhost ceph-mon[308154]: Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:55:25 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:55:25 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:25 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:25 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:25 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Oct 5 05:55:26 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:55:26 localhost podman[311926]:
Oct 5 05:55:26 localhost podman[311926]: 2025-10-05 09:55:26.278090288 +0000 UTC m=+0.080420913 container create 47f914375f89fdafb85f4b03cab2891546dba0c8003d389337cc3e67b4609ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, vcs-type=git, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, release=553, version=7, vendor=Red Hat, Inc.)
Oct 5 05:55:26 localhost systemd[1]: var-lib-containers-storage-overlay-bc87c8f814a839484c5b5b7a06e7b100a91c6618dab45687dc0a95d4bbfb309b-merged.mount: Deactivated successfully.
Oct 5 05:55:26 localhost systemd[1]: Started libpod-conmon-47f914375f89fdafb85f4b03cab2891546dba0c8003d389337cc3e67b4609ad9.scope.
Oct 5 05:55:26 localhost systemd[1]: Started libcrun container.
Oct 5 05:55:26 localhost podman[311926]: 2025-10-05 09:55:26.342949369 +0000 UTC m=+0.145279994 container init 47f914375f89fdafb85f4b03cab2891546dba0c8003d389337cc3e67b4609ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , release=553, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, distribution-scope=public, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.buildah.version=1.33.12, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Oct 5 05:55:26 localhost podman[311926]: 2025-10-05 09:55:26.243903725 +0000 UTC m=+0.046234390 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:55:26 localhost podman[311926]: 2025-10-05 09:55:26.353501974 +0000 UTC m=+0.155832599 container start 47f914375f89fdafb85f4b03cab2891546dba0c8003d389337cc3e67b4609ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55)
Oct 5 05:55:26 localhost podman[311926]: 2025-10-05 09:55:26.353782142 +0000 UTC m=+0.156112807 container attach 47f914375f89fdafb85f4b03cab2891546dba0c8003d389337cc3e67b4609ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, version=7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:55:26 localhost awesome_satoshi[311941]: 167 167
Oct 5 05:55:26 localhost systemd[1]: libpod-47f914375f89fdafb85f4b03cab2891546dba0c8003d389337cc3e67b4609ad9.scope: Deactivated successfully.
Oct 5 05:55:26 localhost podman[311926]: 2025-10-05 09:55:26.357907384 +0000 UTC m=+0.160238009 container died 47f914375f89fdafb85f4b03cab2891546dba0c8003d389337cc3e67b4609ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, GIT_CLEAN=True, GIT_BRANCH=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:55:26 localhost podman[311946]: 2025-10-05 09:55:26.456476975 +0000 UTC m=+0.090763653 container remove 47f914375f89fdafb85f4b03cab2891546dba0c8003d389337cc3e67b4609ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_satoshi, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64)
Oct 5 05:55:26 localhost systemd[1]: libpod-conmon-47f914375f89fdafb85f4b03cab2891546dba0c8003d389337cc3e67b4609ad9.scope: Deactivated successfully.
Oct 5 05:55:26 localhost ceph-mon[308154]: Reconfiguring osd.1 (monmap changed)...
Oct 5 05:55:26 localhost ceph-mon[308154]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:55:26 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:26 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:26 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:55:27 localhost nova_compute[297021]: 2025-10-05 09:55:27.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:27 localhost podman[312022]: Oct 5 05:55:27 localhost podman[312022]: 2025-10-05 09:55:27.252434276 +0000 UTC m=+0.077448163 container create c8578d4d8c7c0e4fc49899777c2b3d87215e5b2ef21af198af564e279a73c9ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_visvesvaraya, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, 
description=Red Hat Ceph Storage 7) Oct 5 05:55:27 localhost systemd[1]: Started libpod-conmon-c8578d4d8c7c0e4fc49899777c2b3d87215e5b2ef21af198af564e279a73c9ea.scope. Oct 5 05:55:27 localhost systemd[1]: Started libcrun container. Oct 5 05:55:27 localhost systemd[1]: var-lib-containers-storage-overlay-b77d32ef0c799c3998afc8b3e0902110db5a87262c8ba9ea711c70fe70dfd53e-merged.mount: Deactivated successfully. Oct 5 05:55:27 localhost podman[312022]: 2025-10-05 09:55:27.314257415 +0000 UTC m=+0.139271312 container init c8578d4d8c7c0e4fc49899777c2b3d87215e5b2ef21af198af564e279a73c9ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_visvesvaraya, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:55:27 localhost podman[312022]: 2025-10-05 09:55:27.220441632 +0000 UTC m=+0.045455529 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:27 localhost podman[312022]: 2025-10-05 09:55:27.324996475 +0000 UTC m=+0.150010392 container start 
c8578d4d8c7c0e4fc49899777c2b3d87215e5b2ef21af198af564e279a73c9ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_visvesvaraya, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, description=Red Hat Ceph Storage 7, release=553, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:55:27 localhost podman[312022]: 2025-10-05 09:55:27.325316414 +0000 UTC m=+0.150330361 container attach c8578d4d8c7c0e4fc49899777c2b3d87215e5b2ef21af198af564e279a73c9ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_visvesvaraya, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, distribution-scope=public, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_CLEAN=True, build-date=2025-09-24T08:57:55, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:55:27 localhost awesome_visvesvaraya[312038]: 167 167 Oct 5 05:55:27 localhost podman[312022]: 2025-10-05 09:55:27.327348469 +0000 UTC m=+0.152362336 container died c8578d4d8c7c0e4fc49899777c2b3d87215e5b2ef21af198af564e279a73c9ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_visvesvaraya, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux , version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:55:27 localhost systemd[1]: libpod-c8578d4d8c7c0e4fc49899777c2b3d87215e5b2ef21af198af564e279a73c9ea.scope: Deactivated successfully. 
Oct 5 05:55:27 localhost podman[312043]: 2025-10-05 09:55:27.432869587 +0000 UTC m=+0.097041830 container remove c8578d4d8c7c0e4fc49899777c2b3d87215e5b2ef21af198af564e279a73c9ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_visvesvaraya, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, release=553) Oct 5 05:55:27 localhost systemd[1]: libpod-conmon-c8578d4d8c7c0e4fc49899777c2b3d87215e5b2ef21af198af564e279a73c9ea.scope: Deactivated successfully. Oct 5 05:55:27 localhost nova_compute[297021]: 2025-10-05 09:55:27.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:27 localhost ceph-mon[308154]: Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:55:27 localhost ceph-mon[308154]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:55:27 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:27 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:27 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:55:28 localhost podman[312118]: Oct 5 05:55:28 localhost podman[312118]: 2025-10-05 09:55:28.26293104 +0000 UTC m=+0.073898137 container create 228cc0d8a43c3183307ad86b314455ef2fb802695eca7cf8afb636549f5518d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_johnson, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.buildah.version=1.33.12, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph) Oct 5 05:55:28 localhost systemd[1]: 
var-lib-containers-storage-overlay-98918e133c2286f0a05b05929abf4e9a3fa53cb5cc7f1999d257a2629f481df7-merged.mount: Deactivated successfully. Oct 5 05:55:28 localhost systemd[1]: Started libpod-conmon-228cc0d8a43c3183307ad86b314455ef2fb802695eca7cf8afb636549f5518d6.scope. Oct 5 05:55:28 localhost podman[312118]: 2025-10-05 09:55:28.23701667 +0000 UTC m=+0.047983817 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:28 localhost systemd[1]: Started libcrun container. Oct 5 05:55:28 localhost podman[312118]: 2025-10-05 09:55:28.352133238 +0000 UTC m=+0.163100345 container init 228cc0d8a43c3183307ad86b314455ef2fb802695eca7cf8afb636549f5518d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_johnson, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=) Oct 5 05:55:28 localhost podman[312118]: 2025-10-05 09:55:28.361357727 +0000 UTC m=+0.172324824 container start 228cc0d8a43c3183307ad86b314455ef2fb802695eca7cf8afb636549f5518d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=epic_johnson, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main) Oct 5 05:55:28 localhost podman[312118]: 2025-10-05 09:55:28.361613304 +0000 UTC m=+0.172580441 container attach 228cc0d8a43c3183307ad86b314455ef2fb802695eca7cf8afb636549f5518d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_johnson, vcs-type=git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, ceph=True, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph 
Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_BRANCH=main) Oct 5 05:55:28 localhost epic_johnson[312133]: 167 167 Oct 5 05:55:28 localhost podman[312118]: 2025-10-05 09:55:28.365301634 +0000 UTC m=+0.176268761 container died 228cc0d8a43c3183307ad86b314455ef2fb802695eca7cf8afb636549f5518d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_johnson, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, version=7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:28 localhost systemd[1]: libpod-228cc0d8a43c3183307ad86b314455ef2fb802695eca7cf8afb636549f5518d6.scope: Deactivated successfully. 
Oct 5 05:55:28 localhost podman[312138]: 2025-10-05 09:55:28.461777549 +0000 UTC m=+0.086163528 container remove 228cc0d8a43c3183307ad86b314455ef2fb802695eca7cf8afb636549f5518d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_johnson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7) Oct 5 05:55:28 localhost systemd[1]: libpod-conmon-228cc0d8a43c3183307ad86b314455ef2fb802695eca7cf8afb636549f5518d6.scope: Deactivated successfully. Oct 5 05:55:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:55:28 localhost podman[312190]: 2025-10-05 09:55:28.836203908 +0000 UTC m=+0.086833035 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:55:28 localhost podman[312190]: 2025-10-05 09:55:28.849787975 +0000 UTC m=+0.100417102 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:55:28 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:55:28 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:55:28 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:55:28 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:28 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:55:28 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:28 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:28 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:55:29 localhost podman[312229]: Oct 5 05:55:29 localhost podman[312229]: 2025-10-05 09:55:29.184279907 +0000 UTC m=+0.078232273 container create 36e68686b8daec67c1f7fe1925eba8452cb446d435a7158e47fdb56fadac5b7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_heyrovsky, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, 
CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 5 05:55:29 localhost systemd[1]: Started libpod-conmon-36e68686b8daec67c1f7fe1925eba8452cb446d435a7158e47fdb56fadac5b7b.scope. Oct 5 05:55:29 localhost systemd[1]: Started libcrun container. Oct 5 05:55:29 localhost podman[312229]: 2025-10-05 09:55:29.250189876 +0000 UTC m=+0.144142252 container init 36e68686b8daec67c1f7fe1925eba8452cb446d435a7158e47fdb56fadac5b7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_heyrovsky, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, ceph=True, release=553, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, vcs-type=git, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux ) Oct 5 05:55:29 localhost podman[312229]: 2025-10-05 09:55:29.150511105 +0000 UTC m=+0.044463511 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:29 localhost podman[312229]: 2025-10-05 09:55:29.259635181 +0000 UTC m=+0.153587537 container start 36e68686b8daec67c1f7fe1925eba8452cb446d435a7158e47fdb56fadac5b7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_heyrovsky, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:29 localhost podman[312229]: 2025-10-05 09:55:29.259899599 +0000 UTC m=+0.153852005 container attach 36e68686b8daec67c1f7fe1925eba8452cb446d435a7158e47fdb56fadac5b7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_heyrovsky, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, ceph=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, 
io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, version=7) Oct 5 05:55:29 localhost gallant_heyrovsky[312244]: 167 167 Oct 5 05:55:29 localhost systemd[1]: libpod-36e68686b8daec67c1f7fe1925eba8452cb446d435a7158e47fdb56fadac5b7b.scope: Deactivated successfully. Oct 5 05:55:29 localhost podman[312229]: 2025-10-05 09:55:29.265312184 +0000 UTC m=+0.159264610 container died 36e68686b8daec67c1f7fe1925eba8452cb446d435a7158e47fdb56fadac5b7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_heyrovsky, RELEASE=main, release=553, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public) Oct 5 05:55:29 localhost systemd[1]: var-lib-containers-storage-overlay-337b5a2e85b1828f5d4f5bced7be2ea848598c3840f34238db6059330283103d-merged.mount: Deactivated successfully. Oct 5 05:55:29 localhost systemd[1]: var-lib-containers-storage-overlay-fb22a6f9d0fc1d1fe43a6cd1e8265707adbcac90da4fcbb953ca04090be82a93-merged.mount: Deactivated successfully. Oct 5 05:55:29 localhost podman[312250]: 2025-10-05 09:55:29.367218576 +0000 UTC m=+0.090646489 container remove 36e68686b8daec67c1f7fe1925eba8452cb446d435a7158e47fdb56fadac5b7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_heyrovsky, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc.) 
Oct 5 05:55:29 localhost systemd[1]: libpod-conmon-36e68686b8daec67c1f7fe1925eba8452cb446d435a7158e47fdb56fadac5b7b.scope: Deactivated successfully. Oct 5 05:55:29 localhost ceph-mon[308154]: Deploying daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:55:29 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:55:29 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:55:29 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:29 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:29 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:55:31 localhost ceph-mon[308154]: Reconfiguring crash.np0005471151 (monmap changed)... 
Oct 5 05:55:31 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:55:31 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:31 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:31 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:31 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:32 localhost nova_compute[297021]: 2025-10-05 09:55:32.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:32 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:32 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:32 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:32 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:32 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:55:32 localhost nova_compute[297021]: 2025-10-05 09:55:32.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:33 localhost ceph-mon[308154]: Reconfiguring osd.2 (monmap changed)... 
Oct 5 05:55:33 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:55:33 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:33 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:33 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:55:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:55:33 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:33 localhost podman[312266]: 2025-10-05 09:55:33.695659008 +0000 UTC m=+0.098406598 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:55:33 localhost podman[312266]: 2025-10-05 09:55:33.712835992 +0000 UTC m=+0.115583552 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:55:33 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:55:34 localhost ceph-mon[308154]: Reconfiguring osd.5 (monmap changed)... Oct 5 05:55:34 localhost ceph-mon[308154]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:55:34 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:34 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:34 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:55:34 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:55:34 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:55:35 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:35 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:35 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:55:35 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:55:35 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:55:35 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:35 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:35 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:55:35 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:35 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:36 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:36 localhost ceph-mon[308154]: Reconfiguring crash.np0005471152 (monmap changed)... 
Oct 5 05:55:36 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:55:36 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:36 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:36 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:55:37 localhost nova_compute[297021]: 2025-10-05 09:55:37.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:37 localhost nova_compute[297021]: 2025-10-05 09:55:37.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:55:37 localhost ceph-mon[308154]: Reconfiguring osd.0 (monmap changed)... Oct 5 05:55:37 localhost ceph-mon[308154]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:55:37 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:37 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:37 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:55:37 localhost systemd[1]: tmp-crun.EADSgg.mount: Deactivated successfully. 
Oct 5 05:55:37 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:37 localhost podman[312289]: 2025-10-05 09:55:37.690047157 +0000 UTC m=+0.085753816 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible) Oct 5 05:55:37 localhost podman[312289]: 2025-10-05 09:55:37.705785121 +0000 UTC m=+0.101491740 
container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, managed_by=edpm_ansible) Oct 5 05:55:37 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:55:37 localhost podman[312290]: 2025-10-05 09:55:37.804893148 +0000 UTC m=+0.199111106 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Oct 5 05:55:37 localhost podman[312290]: 2025-10-05 09:55:37.820755825 +0000 UTC m=+0.214973763 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 05:55:37 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:55:38 localhost ceph-mon[308154]: Reconfiguring osd.3 (monmap changed)... 
Oct 5 05:55:38 localhost ceph-mon[308154]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:55:38 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:38 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:38 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:55:39 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:55:39 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:55:39 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:39 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:39 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:55:39 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:39 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:40 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... 
Oct 5 05:55:40 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:55:40 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:40 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:41 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:41 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:42 localhost nova_compute[297021]: 2025-10-05 09:55:42.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:42 localhost nova_compute[297021]: 2025-10-05 09:55:42.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:42 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:42 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:42 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:55:42 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:42 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Oct 5 05:55:42 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.200:0/3763511495' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Oct 5 05:55:43 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:43 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:55:44 localhost podman[312410]: 2025-10-05 09:55:44.679842874 +0000 UTC m=+0.083144236 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:55:44 localhost podman[312410]: 2025-10-05 09:55:44.715928939 +0000 UTC m=+0.119230271 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 05:55:44 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:55:45 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:46 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:46 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost nova_compute[297021]: 2025-10-05 09:55:47.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:47 localhost ceph-mon[308154]: Reconfig service osd.default_drive_group Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 
172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost nova_compute[297021]: 2025-10-05 09:55:47.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:47 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:47 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:55:48 localhost podman[312499]: 2025-10-05 09:55:48.145508488 +0000 UTC m=+0.070930506 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 05:55:48 localhost podman[312512]: Oct 5 05:55:48 localhost podman[312512]: 2025-10-05 09:55:48.194346457 +0000 UTC m=+0.091538392 container create 9db0f558254fca8d94c5c873d46eaa5f11d49e7ebe8e28a5d9079fd6c8ce64dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_noyce, io.buildah.version=1.33.12, RELEASE=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, GIT_BRANCH=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, build-date=2025-09-24T08:57:55, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True) Oct 5 05:55:48 localhost podman[312499]: 2025-10-05 09:55:48.224864871 +0000 UTC m=+0.150286879 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller) Oct 5 05:55:48 localhost systemd[1]: Started libpod-conmon-9db0f558254fca8d94c5c873d46eaa5f11d49e7ebe8e28a5d9079fd6c8ce64dc.scope. Oct 5 05:55:48 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:55:48 localhost systemd[1]: Started libcrun container. Oct 5 05:55:48 localhost podman[312512]: 2025-10-05 09:55:48.158568891 +0000 UTC m=+0.055760836 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:48 localhost podman[312512]: 2025-10-05 09:55:48.272079356 +0000 UTC m=+0.169271291 container init 9db0f558254fca8d94c5c873d46eaa5f11d49e7ebe8e28a5d9079fd6c8ce64dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_noyce, release=553, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.33.12, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vendor=Red Hat, Inc.) Oct 5 05:55:48 localhost podman[312512]: 2025-10-05 09:55:48.282284962 +0000 UTC m=+0.179476907 container start 9db0f558254fca8d94c5c873d46eaa5f11d49e7ebe8e28a5d9079fd6c8ce64dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_noyce, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, name=rhceph, version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:48 localhost podman[312512]: 2025-10-05 09:55:48.28258411 +0000 UTC m=+0.179776045 container attach 9db0f558254fca8d94c5c873d46eaa5f11d49e7ebe8e28a5d9079fd6c8ce64dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_noyce, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, name=rhceph, distribution-scope=public, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=) Oct 5 05:55:48 localhost hungry_noyce[312540]: 167 167 Oct 5 05:55:48 localhost systemd[1]: libpod-9db0f558254fca8d94c5c873d46eaa5f11d49e7ebe8e28a5d9079fd6c8ce64dc.scope: Deactivated successfully. Oct 5 05:55:48 localhost podman[312512]: 2025-10-05 09:55:48.288318984 +0000 UTC m=+0.185510929 container died 9db0f558254fca8d94c5c873d46eaa5f11d49e7ebe8e28a5d9079fd6c8ce64dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_noyce, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container) Oct 5 05:55:48 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:55:48 localhost ceph-mon[308154]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:55:48 localhost podman[312545]: 2025-10-05 09:55:48.380875913 +0000 UTC m=+0.082344154 container remove 9db0f558254fca8d94c5c873d46eaa5f11d49e7ebe8e28a5d9079fd6c8ce64dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_noyce, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container) Oct 5 05:55:48 localhost systemd[1]: libpod-conmon-9db0f558254fca8d94c5c873d46eaa5f11d49e7ebe8e28a5d9079fd6c8ce64dc.scope: Deactivated successfully. Oct 5 05:55:49 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e84 e84: 6 total, 6 up, 6 in Oct 5 05:55:49 localhost systemd-logind[760]: Session 71 logged out. 
Waiting for processes to exit. Oct 5 05:55:49 localhost systemd[1]: var-lib-containers-storage-overlay-f89e7654256f9ee7538c36485fc8dcb0e640d5038e0f167708d877d58870664f-merged.mount: Deactivated successfully. Oct 5 05:55:49 localhost podman[312621]: Oct 5 05:55:49 localhost podman[312621]: 2025-10-05 09:55:49.279776354 +0000 UTC m=+0.092422296 container create 532c509e05899ee1aa82c4d1cbf4905ae2951810ace671cb891d6423c106221d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_grothendieck, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_CLEAN=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main) Oct 5 05:55:49 localhost systemd[1]: Started libpod-conmon-532c509e05899ee1aa82c4d1cbf4905ae2951810ace671cb891d6423c106221d.scope. Oct 5 05:55:49 localhost podman[312621]: 2025-10-05 09:55:49.244221734 +0000 UTC m=+0.056867756 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:49 localhost systemd[1]: Started libcrun container. 
Oct 5 05:55:49 localhost podman[312621]: 2025-10-05 09:55:49.373875455 +0000 UTC m=+0.186521397 container init 532c509e05899ee1aa82c4d1cbf4905ae2951810ace671cb891d6423c106221d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_grothendieck, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, release=553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, architecture=x86_64, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=) Oct 5 05:55:49 localhost podman[312621]: 2025-10-05 09:55:49.385366426 +0000 UTC m=+0.198012368 container start 532c509e05899ee1aa82c4d1cbf4905ae2951810ace671cb891d6423c106221d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_grothendieck, io.buildah.version=1.33.12, name=rhceph, io.openshift.expose-services=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7) Oct 5 05:55:49 localhost podman[312621]: 2025-10-05 09:55:49.385664604 +0000 UTC m=+0.198310536 container attach 532c509e05899ee1aa82c4d1cbf4905ae2951810ace671cb891d6423c106221d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_grothendieck, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, release=553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:55:49 localhost dreamy_grothendieck[312636]: 167 167 Oct 5 05:55:49 localhost systemd[1]: 
libpod-532c509e05899ee1aa82c4d1cbf4905ae2951810ace671cb891d6423c106221d.scope: Deactivated successfully. Oct 5 05:55:49 localhost podman[312621]: 2025-10-05 09:55:49.389283201 +0000 UTC m=+0.201929163 container died 532c509e05899ee1aa82c4d1cbf4905ae2951810ace671cb891d6423c106221d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_grothendieck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, io.buildah.version=1.33.12, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, distribution-scope=public, release=553) Oct 5 05:55:49 localhost sshd[312640]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:55:49 localhost podman[312642]: 2025-10-05 09:55:49.498948122 +0000 UTC m=+0.096651150 container remove 532c509e05899ee1aa82c4d1cbf4905ae2951810ace671cb891d6423c106221d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_grothendieck, com.redhat.component=rhceph-container, version=7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, 
architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, name=rhceph) Oct 5 05:55:49 localhost systemd[1]: libpod-conmon-532c509e05899ee1aa82c4d1cbf4905ae2951810ace671cb891d6423c106221d.scope: Deactivated successfully. Oct 5 05:55:49 localhost systemd-logind[760]: New session 72 of user ceph-admin. Oct 5 05:55:49 localhost systemd[1]: Started Session 72 of User ceph-admin. Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:55:49 localhost ceph-mon[308154]: from='client.? 172.18.0.200:0/3757018629' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:55:49 localhost ceph-mon[308154]: Activating manager daemon np0005471148.fayrer Oct 5 05:55:49 localhost ceph-mon[308154]: from='client.? 
172.18.0.200:0/3757018629' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:55:49 localhost ceph-mon[308154]: Manager daemon np0005471148.fayrer is now available Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"}]': finished Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"}]': finished Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/mirror_snapshot_schedule"} : dispatch Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/mirror_snapshot_schedule"} : dispatch Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' 
cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/trash_purge_schedule"} : dispatch Oct 5 05:55:49 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/trash_purge_schedule"} : dispatch Oct 5 05:55:49 localhost systemd[1]: session-71.scope: Deactivated successfully. Oct 5 05:55:49 localhost systemd[1]: session-71.scope: Consumed 23.508s CPU time. Oct 5 05:55:49 localhost systemd-logind[760]: Removed session 71. Oct 5 05:55:49 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:49 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:50 localhost systemd[1]: var-lib-containers-storage-overlay-5e27e10802263e703df87ed7b60c8623525df44979a22a18099ff6d316cb48ee-merged.mount: Deactivated successfully. 
Oct 5 05:55:50 localhost podman[312778]: 2025-10-05 09:55:50.517527104 +0000 UTC m=+0.082900699 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , version=7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:55:50 localhost ceph-mon[308154]: removing stray HostCache host record np0005471147.localdomain.devices.0 Oct 5 05:55:50 localhost podman[312778]: 2025-10-05 09:55:50.604466792 +0000 UTC m=+0.169840347 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.buildah.version=1.33.12, GIT_BRANCH=main, name=rhceph, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:51 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:55:51 localhost podman[248506]: time="2025-10-05T09:55:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:55:51 localhost podman[248506]: @ - - [05/Oct/2025:09:55:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:55:51 localhost podman[248506]: @ - - [05/Oct/2025:09:55:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18833 "" "Go-http-client/1.1" Oct 5 05:55:51 localhost podman[312936]: 2025-10-05 09:55:51.557222717 +0000 UTC m=+0.095003036 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:55:51 localhost podman[312936]: 2025-10-05 09:55:51.569812116 +0000 UTC m=+0.107592405 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:55:51 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:55:51 localhost ceph-mon[308154]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Oct 5 05:55:51 localhost ceph-mon[308154]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Oct 5 05:55:51 localhost ceph-mon[308154]: Cluster is now healthy Oct 5 05:55:51 localhost ceph-mon[308154]: [05/Oct/2025:09:55:51] ENGINE Bus STARTING Oct 5 05:55:51 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[308154]: [05/Oct/2025:09:55:51] ENGINE Serving on https://172.18.0.105:7150 Oct 5 05:55:51 localhost ceph-mon[308154]: [05/Oct/2025:09:55:51] ENGINE Client ('172.18.0.105', 40356) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:55:51 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[308154]: [05/Oct/2025:09:55:51] ENGINE Serving on http://172.18.0.105:8765 Oct 5 05:55:51 localhost ceph-mon[308154]: [05/Oct/2025:09:55:51] ENGINE Bus STARTED Oct 5 05:55:51 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[308154]: 
from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:51 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:52 localhost openstack_network_exporter[250601]: ERROR 09:55:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:55:52 localhost openstack_network_exporter[250601]: ERROR 09:55:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:55:52 localhost openstack_network_exporter[250601]: ERROR 09:55:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:55:52 localhost openstack_network_exporter[250601]: Oct 5 05:55:52 localhost openstack_network_exporter[250601]: ERROR 09:55:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:55:52 localhost openstack_network_exporter[250601]: ERROR 09:55:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:55:52 localhost openstack_network_exporter[250601]: Oct 5 05:55:52 localhost nova_compute[297021]: 2025-10-05 09:55:52.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 05:55:52 localhost nova_compute[297021]: 2025-10-05 09:55:52.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:52 localhost podman[313023]: 2025-10-05 09:55:52.670134466 +0000 UTC m=+0.070018262 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, 
distribution-scope=public, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41) Oct 5 05:55:52 localhost podman[313023]: 2025-10-05 09:55:52.688931773 +0000 UTC m=+0.088815579 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 05:55:52 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:55:53 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' 
entity='mgr.np0005471148.fayrer' Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:55:53 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 05:55:53 localhost 
ceph-mon[308154]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:55:53 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:55:53 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:55:53 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:55:53 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:55:53 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:55:53 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:55:53 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:53 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:54 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:55:54 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:55:54 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:55:54 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:55:54 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:55:54 localhost ceph-mon[308154]: Updating 
np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:55:54 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:55:54 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:55:55 localhost sshd[313576]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:55:55 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:55:55 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:55:55 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:55:55 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:55:55 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:55 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:55 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:55 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:55 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:55 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:55 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:55 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:55 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:55 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list 
of hints Oct 5 05:55:55 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:56 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:56 localhost podman[313774]: Oct 5 05:55:56 localhost podman[313774]: 2025-10-05 09:55:56.789528292 +0000 UTC m=+0.085868890 container create 7b1511ae53b17df8ce3a0632ecb408b12b1f1f6ba658fcdd60475805e80024cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_wilson, vcs-type=git, distribution-scope=public, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=) Oct 5 05:55:56 localhost systemd[1]: Started libpod-conmon-7b1511ae53b17df8ce3a0632ecb408b12b1f1f6ba658fcdd60475805e80024cb.scope. Oct 5 05:55:56 localhost systemd[1]: Started libcrun container. 
Oct 5 05:55:56 localhost podman[313774]: 2025-10-05 09:55:56.758267068 +0000 UTC m=+0.054607676 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:56 localhost podman[313774]: 2025-10-05 09:55:56.864590128 +0000 UTC m=+0.160930716 container init 7b1511ae53b17df8ce3a0632ecb408b12b1f1f6ba658fcdd60475805e80024cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_wilson, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, name=rhceph, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph) Oct 5 05:55:56 localhost podman[313774]: 2025-10-05 09:55:56.8757678 +0000 UTC m=+0.172108388 container start 7b1511ae53b17df8ce3a0632ecb408b12b1f1f6ba658fcdd60475805e80024cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_wilson, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a 
fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=553, architecture=x86_64, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_CLEAN=True) Oct 5 05:55:56 localhost podman[313774]: 2025-10-05 09:55:56.876080559 +0000 UTC m=+0.172421167 container attach 7b1511ae53b17df8ce3a0632ecb408b12b1f1f6ba658fcdd60475805e80024cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_wilson, name=rhceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, vcs-type=git, io.buildah.version=1.33.12, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=) Oct 5 05:55:56 localhost 
trusting_wilson[313790]: 167 167 Oct 5 05:55:56 localhost systemd[1]: libpod-7b1511ae53b17df8ce3a0632ecb408b12b1f1f6ba658fcdd60475805e80024cb.scope: Deactivated successfully. Oct 5 05:55:56 localhost podman[313774]: 2025-10-05 09:55:56.882870012 +0000 UTC m=+0.179210650 container died 7b1511ae53b17df8ce3a0632ecb408b12b1f1f6ba658fcdd60475805e80024cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_wilson, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_CLEAN=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, release=553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:55:56 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:55:56 localhost ceph-mon[308154]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:55:56 localhost podman[313795]: 2025-10-05 09:55:56.993053396 +0000 UTC m=+0.099024304 container remove 7b1511ae53b17df8ce3a0632ecb408b12b1f1f6ba658fcdd60475805e80024cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_wilson, GIT_CLEAN=True, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12) Oct 5 05:55:56 localhost systemd[1]: libpod-conmon-7b1511ae53b17df8ce3a0632ecb408b12b1f1f6ba658fcdd60475805e80024cb.scope: Deactivated successfully. Oct 5 05:55:57 localhost nova_compute[297021]: 2025-10-05 09:55:57.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:57 localhost nova_compute[297021]: 2025-10-05 09:55:57.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:55:57 localhost systemd[1]: var-lib-containers-storage-overlay-1f1e2ca58a774523aaff3e1a480f0421ed4255160dc3958464b99f145853f77d-merged.mount: Deactivated successfully. 
Oct 5 05:55:57 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:58 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:58 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:58 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:58 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:58 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:55:58 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:55:59 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:59 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:59 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:59 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:59 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:55:59 localhost ceph-mon[308154]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:55:59 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:55:59 localhost podman[313819]: 2025-10-05 09:55:59.699892274 +0000 UTC m=+0.096124307 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:55:59 localhost podman[313819]: 2025-10-05 09:55:59.715816373 +0000 UTC m=+0.112048346 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:55:59 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:55:59 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:59 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:00 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:00 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:00 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:00 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:00 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:56:00 localhost ceph-mon[308154]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:56:01 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:01 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:01 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:01 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:01 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:01 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:56:01 localhost 
ceph-mon[308154]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:56:01 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:01 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:02 localhost nova_compute[297021]: 2025-10-05 09:56:02.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:02 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:02 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:02 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:02 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:02 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:56:02 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:02 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:56:02 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/78766481' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:56:02 localhost nova_compute[297021]: 2025-10-05 09:56:02.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:03 localhost nova_compute[297021]: 2025-10-05 09:56:03.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:03 localhost nova_compute[297021]: 2025-10-05 09:56:03.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:56:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:56:03 localhost podman[313881]: 2025-10-05 09:56:03.918179828 +0000 UTC m=+0.088287164 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:56:03 localhost podman[313881]: 2025-10-05 09:56:03.929847854 +0000 UTC m=+0.099955220 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:56:03 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:56:03 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.499 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.500 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.500 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.501 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.501 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:56:04 localhost ceph-mon[308154]: Saving service mon spec with placement label:mon Oct 5 05:56:04 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:04 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:56:04 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:04 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:04 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "mon."} : 
dispatch Oct 5 05:56:04 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:04 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:56:04 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1498837551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:56:04 localhost nova_compute[297021]: 2025-10-05 09:56:04.977 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.091 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.092 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.326 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.328 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11751MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.329 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.329 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.506 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.507 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.507 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:56:05 localhost podman[313980]: Oct 5 05:56:05 localhost podman[313980]: 2025-10-05 09:56:05.532870666 +0000 UTC m=+0.073714222 container create 8b92bbb0bda495deaf1083c2d769d32aa9d3039c01b7fa52c3e5c10d809d72b8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_franklin, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, release=553, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, vcs-type=git) Oct 5 05:56:05 localhost nova_compute[297021]: 2025-10-05 09:56:05.536 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:56:05 localhost systemd[1]: Started libpod-conmon-8b92bbb0bda495deaf1083c2d769d32aa9d3039c01b7fa52c3e5c10d809d72b8.scope. Oct 5 05:56:05 localhost ceph-mon[308154]: Reconfiguring mon.np0005471148 (monmap changed)... Oct 5 05:56:05 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain Oct 5 05:56:05 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:05 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:05 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:05 localhost podman[313980]: 2025-10-05 09:56:05.49930778 +0000 UTC m=+0.040151356 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:56:05 localhost systemd[1]: Started libcrun container. 
Oct 5 05:56:05 localhost podman[313980]: 2025-10-05 09:56:05.633152764 +0000 UTC m=+0.173996320 container init 8b92bbb0bda495deaf1083c2d769d32aa9d3039c01b7fa52c3e5c10d809d72b8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_franklin, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, release=553, version=7) Oct 5 05:56:05 localhost podman[313980]: 2025-10-05 09:56:05.646150505 +0000 UTC m=+0.186994061 container start 8b92bbb0bda495deaf1083c2d769d32aa9d3039c01b7fa52c3e5c10d809d72b8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_franklin, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, release=553, io.buildah.version=1.33.12, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, 
description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:56:05 localhost podman[313980]: 2025-10-05 09:56:05.646439362 +0000 UTC m=+0.187282928 container attach 8b92bbb0bda495deaf1083c2d769d32aa9d3039c01b7fa52c3e5c10d809d72b8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_franklin, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.expose-services=, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, architecture=x86_64, version=7, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:56:05 localhost bold_franklin[313996]: 167 167 Oct 5 05:56:05 localhost systemd[1]: libpod-8b92bbb0bda495deaf1083c2d769d32aa9d3039c01b7fa52c3e5c10d809d72b8.scope: Deactivated 
successfully. Oct 5 05:56:05 localhost podman[313980]: 2025-10-05 09:56:05.651652773 +0000 UTC m=+0.192496309 container died 8b92bbb0bda495deaf1083c2d769d32aa9d3039c01b7fa52c3e5c10d809d72b8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_franklin, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:56:05 localhost podman[314002]: 2025-10-05 09:56:05.786014551 +0000 UTC m=+0.118839639 container remove 8b92bbb0bda495deaf1083c2d769d32aa9d3039c01b7fa52c3e5c10d809d72b8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_franklin, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, release=553, version=7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-type=git, 
CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True, io.buildah.version=1.33.12, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True) Oct 5 05:56:05 localhost systemd[1]: libpod-conmon-8b92bbb0bda495deaf1083c2d769d32aa9d3039c01b7fa52c3e5c10d809d72b8.scope: Deactivated successfully. Oct 5 05:56:05 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:05 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:06 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:56:06 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4030041380' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:56:06 localhost nova_compute[297021]: 2025-10-05 09:56:06.058 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:56:06 localhost nova_compute[297021]: 2025-10-05 09:56:06.068 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:56:06 localhost nova_compute[297021]: 2025-10-05 09:56:06.090 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:56:06 localhost nova_compute[297021]: 2025-10-05 09:56:06.093 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:56:06 localhost nova_compute[297021]: 2025-10-05 09:56:06.093 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:56:06 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:06 localhost systemd[1]: var-lib-containers-storage-overlay-47fffe63a935ad7a83ba3b0b8bd892eb8be85368dd29e43c5d755c7b565fcc3d-merged.mount: Deactivated successfully. Oct 5 05:56:06 localhost ceph-mon[308154]: Reconfiguring mon.np0005471150 (monmap changed)... Oct 5 05:56:06 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:56:06 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:06 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:06 localhost ceph-mon[308154]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.612069) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166612120, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2804, "num_deletes": 256, "total_data_size": 8504867, "memory_usage": 8746208, "flush_reason": "Manual Compaction"} Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166648774, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 5067169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13719, "largest_seqno": 16518, "table_properties": {"data_size": 5055540, "index_size": 7238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 29742, "raw_average_key_size": 22, "raw_value_size": 5030205, "raw_average_value_size": 3822, "num_data_blocks": 314, "num_entries": 1316, "num_filter_entries": 1316, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658102, "oldest_key_time": 1759658102, "file_creation_time": 1759658166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 36792 microseconds, and 12021 cpu microseconds. Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.648850) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 5067169 bytes OK Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.648885) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.651191) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.651216) EVENT_LOG_v1 {"time_micros": 1759658166651207, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.651239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 8491331, prev total WAL file size 
8491331, number of live WAL files 2. Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.653313) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end) Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(4948KB)], [21(13MB)] Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166653424, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 19439432, "oldest_snapshot_seqno": -1} Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 11270 keys, 17537388 bytes, temperature: kUnknown Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166774830, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 17537388, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17469516, "index_size": 38587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 299712, "raw_average_key_size": 26, "raw_value_size": 17274022, 
"raw_average_value_size": 1532, "num_data_blocks": 1490, "num_entries": 11270, "num_filter_entries": 11270, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.775259) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 17537388 bytes Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.777207) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.9 rd, 144.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.8, 13.7 +0.0 blob) out(16.7 +0.0 blob), read-write-amplify(7.3) write-amplify(3.5) OK, records in: 11819, records dropped: 549 output_compression: NoCompression Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.777238) EVENT_LOG_v1 {"time_micros": 1759658166777224, "job": 10, "event": "compaction_finished", "compaction_time_micros": 121563, "compaction_time_cpu_micros": 44004, "output_level": 6, "num_output_files": 1, "total_output_size": 17537388, "num_input_records": 11819, "num_output_records": 11270, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166777996, "job": 10, "event": "table_file_deletion", "file_number": 23} Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166780140, "job": 10, 
"event": "table_file_deletion", "file_number": 21} Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.652972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.780222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.780230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.780233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.780236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:06.780239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:06 localhost sshd[314038]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:56:07 localhost nova_compute[297021]: 2025-10-05 09:56:07.090 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:07 localhost nova_compute[297021]: 2025-10-05 09:56:07.090 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:07 localhost nova_compute[297021]: 2025-10-05 09:56:07.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:07 localhost nova_compute[297021]: 2025-10-05 09:56:07.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:07 localhost nova_compute[297021]: 2025-10-05 09:56:07.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:07 localhost ceph-mon[308154]: Reconfiguring mon.np0005471152 (monmap changed)... Oct 5 05:56:07 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:56:07 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:07 localhost ceph-mon[308154]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:07 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:56:08 localhost podman[314041]: 2025-10-05 09:56:08.703246137 +0000 UTC m=+0.088046068 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true) Oct 5 05:56:08 localhost podman[314041]: 2025-10-05 09:56:08.712167269 +0000 UTC m=+0.096967170 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:56:08 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:56:08 localhost podman[314042]: 2025-10-05 09:56:08.766459975 +0000 UTC m=+0.149981450 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:56:08 localhost podman[314042]: 2025-10-05 09:56:08.776047993 +0000 UTC m=+0.159569468 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:56:08 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:56:09 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:11 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:11 localhost nova_compute[297021]: 2025-10-05 09:56:11.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:11 localhost nova_compute[297021]: 2025-10-05 09:56:11.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:56:11 localhost nova_compute[297021]: 2025-10-05 09:56:11.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:56:11 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x56322b017080 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Oct 5 05:56:11 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:56:11 localhost ceph-mon[308154]: paxos.2).electionLogic(44) init, last seen epoch 44 Oct 5 05:56:11 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:11 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:11 localhost nova_compute[297021]: 2025-10-05 
09:56:11.956 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:56:11 localhost nova_compute[297021]: 2025-10-05 09:56:11.957 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:56:11 localhost nova_compute[297021]: 2025-10-05 09:56:11.957 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:56:11 localhost nova_compute[297021]: 2025-10-05 09:56:11.958 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:56:12 localhost nova_compute[297021]: 2025-10-05 09:56:12.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:12 localhost nova_compute[297021]: 2025-10-05 09:56:12.470 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:56:12 localhost nova_compute[297021]: 2025-10-05 09:56:12.494 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:56:12 localhost nova_compute[297021]: 2025-10-05 09:56:12.494 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:56:12 localhost nova_compute[297021]: 2025-10-05 09:56:12.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:56:15 localhost podman[314079]: 2025-10-05 09:56:15.682171333 +0000 UTC m=+0.092449977 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:56:15 localhost podman[314079]: 2025-10-05 09:56:15.69170574 +0000 UTC 
m=+0.101984344 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:56:15 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:56:15 localhost ceph-mds[300279]: mds.beacon.mds.np0005471150.bsiqok missed beacon ack from the monitors Oct 5 05:56:16 localhost sshd[314099]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:56:16 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:16 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:56:16 localhost ceph-mon[308154]: paxos.2).electionLogic(46) init, last seen epoch 46 Oct 5 05:56:16 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:16 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:16 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e12 handle_timecheck drop unexpected msg Oct 5 05:56:16 localhost ceph-mon[308154]: mon.np0005471150@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:16 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:17 localhost nova_compute[297021]: 2025-10-05 09:56:17.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:17 localhost nova_compute[297021]: 2025-10-05 09:56:17.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:17 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election Oct 5 05:56:17 localhost ceph-mon[308154]: mon.np0005471151 calling monitor election Oct 5 05:56:17 localhost ceph-mon[308154]: mon.np0005471151 calling 
monitor election Oct 5 05:56:17 localhost ceph-mon[308154]: Health check failed: 1/4 mons down, quorum np0005471148,np0005471152,np0005471150 (MON_DOWN) Oct 5 05:56:17 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election Oct 5 05:56:17 localhost ceph-mon[308154]: mon.np0005471150 calling monitor election Oct 5 05:56:17 localhost ceph-mon[308154]: overall HEALTH_OK Oct 5 05:56:17 localhost ceph-mon[308154]: mon.np0005471148 calling monitor election Oct 5 05:56:17 localhost ceph-mon[308154]: mon.np0005471148 is new leader, mons np0005471148,np0005471152,np0005471150,np0005471151 in quorum (ranks 0,1,2,3) Oct 5 05:56:17 localhost ceph-mon[308154]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005471148,np0005471152,np0005471150) Oct 5 05:56:17 localhost ceph-mon[308154]: Cluster is now healthy Oct 5 05:56:17 localhost ceph-mon[308154]: overall HEALTH_OK Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e85 e85: 6 total, 6 up, 6 in Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr handle_mgr_map Activating! 
Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr handle_mgr_map I am now activating Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471148"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471148"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471150"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471150"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005471151.uyxcpj"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mds metadata", "who": "mds.np0005471151.uyxcpj"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: 
mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005471150.bsiqok"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mds metadata", "who": "mds.np0005471150.bsiqok"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005471152.pozuqw"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mds metadata", "who": "mds.np0005471152.pozuqw"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471150.zwqxye", "id": "np0005471150.zwqxye"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr metadata", "who": "np0005471150.zwqxye", "id": "np0005471150.zwqxye"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471151.jecxod", "id": "np0005471151.jecxod"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr metadata", "who": "np0005471151.jecxod", "id": "np0005471151.jecxod"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471152.kbhlus", "id": "np0005471152.kbhlus"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr metadata", "who": 
"np0005471152.kbhlus", "id": "np0005471152.kbhlus"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd metadata", "id": 0} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd metadata", "id": 1} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd metadata", "id": 2} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd metadata", "id": 3} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd metadata", "id": 4} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 
172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd metadata", "id": 5} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon).mds e16 all = 0 Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon).mds e16 all = 0 Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon).mds e16 all = 0 Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mds metadata"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mds metadata"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon).mds e16 all = 1 Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "osd metadata"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd metadata"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mon metadata"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata"} : dispatch Oct 5 05:56:18 localhost ceph-mgr[301561]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: balancer Oct 5 05:56:18 localhost ceph-mgr[301561]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: [balancer INFO root] Starting Oct 5 05:56:18 localhost ceph-mgr[301561]: [balancer INFO root] Optimize plan auto_2025-10-05_09:56:18 Oct 5 05:56:18 localhost ceph-mgr[301561]: [balancer INFO root] Mode upmap, max 
misplaced 0.050000 Oct 5 05:56:18 localhost ceph-mgr[301561]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Oct 5 05:56:18 localhost systemd[1]: session-72.scope: Deactivated successfully. Oct 5 05:56:18 localhost systemd[1]: session-72.scope: Consumed 7.883s CPU time. Oct 5 05:56:18 localhost systemd-logind[760]: Session 72 logged out. Waiting for processes to exit. Oct 5 05:56:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: cephadm Oct 5 05:56:18 localhost ceph-mgr[301561]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: crash Oct 5 05:56:18 localhost ceph-mgr[301561]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: devicehealth Oct 5 05:56:18 localhost ceph-mgr[301561]: [devicehealth INFO root] Starting Oct 5 05:56:18 localhost ceph-mgr[301561]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: iostat Oct 5 05:56:18 localhost ceph-mgr[301561]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: nfs Oct 5 05:56:18 localhost systemd-logind[760]: Removed session 72. 
Oct 5 05:56:18 localhost ceph-mgr[301561]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: orchestrator Oct 5 05:56:18 localhost ceph-mgr[301561]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: pg_autoscaler Oct 5 05:56:18 localhost ceph-mgr[301561]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: progress Oct 5 05:56:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: [progress INFO root] Loading... Oct 5 05:56:18 localhost ceph-mgr[301561]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Oct 5 05:56:18 localhost ceph-mgr[301561]: [progress INFO root] Loaded OSDMap, ready. 
Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] recovery thread starting Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] starting setup Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: rbd_support Oct 5 05:56:18 localhost ceph-mgr[301561]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: restful Oct 5 05:56:18 localhost ceph-mgr[301561]: [restful INFO root] server_addr: :: server_port: 8003 Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/mirror_snapshot_schedule"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/mirror_snapshot_schedule"} : dispatch Oct 5 05:56:18 localhost ceph-mgr[301561]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: status Oct 5 05:56:18 localhost ceph-mgr[301561]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: telemetry Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 05:56:18 localhost ceph-mgr[301561]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [restful WARNING root] server not running: no certificate configured Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: volumes, 
start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] PerfHandler: starting Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_task_task: vms, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_task_task: volumes, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 05:56:18 localhost ceph-mgr[301561]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 05:56:18 localhost ceph-mgr[301561]: mgr load Constructed class from module: volumes Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_task_task: images, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_task_task: backups, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.637+0000 7fbb9eec4640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.637+0000 7fbb9eec4640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.637+0000 7fbb9eec4640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.637+0000 7fbb9eec4640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.637+0000 7fbb9eec4640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] TaskHandler: starting Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/trash_purge_schedule"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/trash_purge_schedule"} : dispatch Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.643+0000 7fbba36cd640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.643+0000 7fbba36cd640 -1 client.0 error registering admin socket 
command: (17) File exists Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.643+0000 7fbba36cd640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.643+0000 7fbba36cd640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-mgr[301561]: client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:56:18.643+0000 7fbba36cd640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:56:18 localhost podman[314142]: 2025-10-05 09:56:18.644090166 +0000 UTC m=+0.109972080 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Oct 5 05:56:18 localhost ceph-mgr[301561]: [rbd_support INFO root] setup complete Oct 5 05:56:18 localhost podman[314142]: 2025-10-05 09:56:18.69573916 +0000 UTC m=+0.161621034 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 05:56:18 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:56:18 localhost ceph-mon[308154]: from='client.? 172.18.0.200:0/4001180372' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: Activating manager daemon np0005471150.zwqxye Oct 5 05:56:18 localhost ceph-mon[308154]: from='client.? 
172.18.0.200:0/4001180372' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:56:18 localhost ceph-mon[308154]: Manager daemon np0005471150.zwqxye is now available Oct 5 05:56:18 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/mirror_snapshot_schedule"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/mirror_snapshot_schedule"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/trash_purge_schedule"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/trash_purge_schedule"} : dispatch Oct 5 05:56:18 localhost sshd[314266]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:56:18 localhost systemd-logind[760]: New session 73 of user ceph-admin. Oct 5 05:56:18 localhost systemd[1]: Started Session 73 of User ceph-admin. Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/618585936' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 05:56:18 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 05:56:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/618585936' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 05:56:19 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:56:19 localhost ceph-mgr[301561]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:56:19] ENGINE Bus STARTING Oct 5 05:56:19 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:56:19] ENGINE Bus STARTING Oct 5 05:56:19 localhost ceph-mgr[301561]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:56:19] ENGINE Serving on http://172.18.0.106:8765 Oct 5 05:56:19 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:56:19] ENGINE Serving on http://172.18.0.106:8765 Oct 5 05:56:19 localhost systemd[1]: tmp-crun.skD7C7.mount: Deactivated successfully. Oct 5 05:56:19 localhost podman[314388]: 2025-10-05 09:56:19.950207422 +0000 UTC m=+0.098660995 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.k8s.description=Red Hat Ceph 
Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Oct 5 05:56:20 localhost ceph-mgr[301561]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:56:20] ENGINE Serving on https://172.18.0.106:7150 Oct 5 05:56:20 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:56:20] ENGINE Serving on https://172.18.0.106:7150 Oct 5 05:56:20 localhost ceph-mgr[301561]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:56:20] ENGINE Bus STARTED Oct 5 05:56:20 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:56:20] ENGINE Bus STARTED Oct 5 05:56:20 localhost ceph-mgr[301561]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:56:20] ENGINE Client ('172.18.0.106', 51652) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:56:20 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:56:20] ENGINE Client ('172.18.0.106', 51652) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:56:20 localhost podman[314388]: 2025-10-05 09:56:20.069865543 +0000 UTC m=+0.218319166 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.buildah.version=1.33.12, io.openshift.expose-services=, version=7, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph) Oct 5 05:56:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0) Oct 5 05:56:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0) Oct 5 05:56:20 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:56:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:56:20.456 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:56:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:56:20.458 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:56:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:56:20.460 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:56:20 localhost ceph-mgr[301561]: [devicehealth INFO root] Check health Oct 5 05:56:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:56:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:56:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:56:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:56:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:56:20 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:56:21 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:21 localhost ceph-mon[308154]: [05/Oct/2025:09:56:19] ENGINE Bus STARTING Oct 5 05:56:21 localhost ceph-mon[308154]: [05/Oct/2025:09:56:19] ENGINE Serving on http://172.18.0.106:8765 Oct 5 05:56:21 localhost ceph-mon[308154]: [05/Oct/2025:09:56:20] ENGINE Serving on https://172.18.0.106:7150 Oct 5 05:56:21 localhost ceph-mon[308154]: [05/Oct/2025:09:56:20] ENGINE Bus STARTED Oct 5 05:56:21 localhost ceph-mon[308154]: [05/Oct/2025:09:56:20] ENGINE Client 
('172.18.0.106', 51652) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:56:21 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost podman[248506]: time="2025-10-05T09:56:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:56:21 localhost podman[248506]: @ - - [05/Oct/2025:09:56:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:56:21 localhost podman[248506]: @ - - [05/Oct/2025:09:56:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18841 "" "Go-http-client/1.1" Oct 5 05:56:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:56:21 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0) Oct 5 05:56:21 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0) Oct 5 05:56:21 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} v 0) Oct 5 05:56:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:56:21 localhost systemd[1]: tmp-crun.aBL6ut.mount: Deactivated successfully. Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. 
Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:21.919287) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658181919373, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 680, "num_deletes": 256, "total_data_size": 2229826, "memory_usage": 2302288, "flush_reason": "Manual Compaction"} Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Oct 5 05:56:21 localhost podman[314613]: 2025-10-05 09:56:21.923855752 +0000 UTC m=+0.097372951 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658181930714, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 1461678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16523, "largest_seqno": 17198, "table_properties": {"data_size": 1458023, "index_size": 1382, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9457, "raw_average_key_size": 20, "raw_value_size": 1449919, "raw_average_value_size": 3078, "num_data_blocks": 54, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658166, 
"oldest_key_time": 1759658166, "file_creation_time": 1759658181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11532 microseconds, and 4650 cpu microseconds. Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:21.930815) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 1461678 bytes OK Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:21.930867) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:21.933131) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:21.933159) EVENT_LOG_v1 {"time_micros": 1759658181933149, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:21.933204) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2225806, prev total WAL file size 2225806, number of live WAL files 2. 
Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:21.935521) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323632' seq:72057594037927935, type:22 .. '6B760031353138' seq:0, type:0; will stop at (end)
Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(1427KB)], [24(16MB)]
Oct 5 05:56:21 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658181935595, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 18999066, "oldest_snapshot_seqno": -1}
Oct 5 05:56:21 localhost podman[314613]: 2025-10-05 09:56:21.936844162 +0000 UTC m=+0.110361311 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 05:56:21 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully.
Oct 5 05:56:22 localhost openstack_network_exporter[250601]: ERROR 09:56:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:56:22 localhost openstack_network_exporter[250601]: ERROR 09:56:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:56:22 localhost openstack_network_exporter[250601]: ERROR 09:56:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:56:22 localhost openstack_network_exporter[250601]: ERROR 09:56:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:56:22 localhost openstack_network_exporter[250601]:
Oct 5 05:56:22 localhost openstack_network_exporter[250601]: ERROR 09:56:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:56:22 localhost openstack_network_exporter[250601]:
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 11191 keys, 17845226 bytes, temperature: kUnknown
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658182066017, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 17845226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17778847, "index_size": 37286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28037, "raw_key_size": 300113, "raw_average_key_size": 26, "raw_value_size": 17585465, "raw_average_value_size": 1571, "num_data_blocks": 1416, "num_entries": 11191, "num_filter_entries": 11191, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:22.066345) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 17845226 bytes
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:22.067903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.6 rd, 136.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 16.7 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(25.2) write-amplify(12.2) OK, records in: 11741, records dropped: 550 output_compression: NoCompression
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:22.067935) EVENT_LOG_v1 {"time_micros": 1759658182067922, "job": 12, "event": "compaction_finished", "compaction_time_micros": 130532, "compaction_time_cpu_micros": 41969, "output_level": 6, "num_output_files": 1, "total_output_size": 17845226, "num_input_records": 11741, "num_output_records": 11191, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658182068305, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658182070934, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:21.934238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:22.071079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:22.071087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:22.071090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:22.071093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:56:22.071096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:22 localhost nova_compute[297021]: 2025-10-05 09:56:22.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm INFO root] Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm INFO root] Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm INFO root] Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 05:56:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:56:22 localhost nova_compute[297021]: 2025-10-05 09:56:22.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:22 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:22 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 05:56:22 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:56:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:22 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:22 localhost podman[314741]: 2025-10-05 09:56:22.845050555 +0000 UTC m=+0.092159170 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 5 05:56:22 localhost podman[314741]: 2025-10-05 09:56:22.879813933 +0000 UTC m=+0.126922558 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git)
Oct 5 05:56:22 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully.
Oct 5 05:56:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:23 localhost ceph-mgr[301561]: mgr.server handle_open ignoring open from mgr.np0005471148.fayrer 172.18.0.105:0/3877410683; not ready for session (expect reconnect)
Oct 5 05:56:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:24 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:24 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:24 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:24 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471148.fayrer", "id": "np0005471148.fayrer"} v 0)
Oct 5 05:56:24 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr metadata", "who": "np0005471148.fayrer", "id": "np0005471148.fayrer"} : dispatch
Oct 5 05:56:24 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:56:24 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:24 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0)
Oct 5 05:56:24 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0)
Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:56:25 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev 4f14b0fd-fe5f-4227-95cb-71c7afef4792 (Updating node-proxy deployment (+4 -> 4))
Oct 5 05:56:25 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev 4f14b0fd-fe5f-4227-95cb-71c7afef4792 (Updating node-proxy deployment (+4 -> 4))
Oct 5 05:56:25 localhost ceph-mgr[301561]: [progress INFO root] Completed event 4f14b0fd-fe5f-4227-95cb-71c7afef4792 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 05:56:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 05:56:25 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:25 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:25 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:25 localhost ceph-mon[308154]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:25 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:25 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:25 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:25 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:25 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:25 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:25 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:25 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:25 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:25 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:25 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:25 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye'
Oct 5
05:56:25 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471148 (monmap changed)... Oct 5 05:56:25 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471148 (monmap changed)... Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Oct 5 05:56:25 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Oct 5 05:56:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Oct 5 05:56:25 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:25 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain Oct 5 05:56:25 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain Oct 5 05:56:26 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:26 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0) Oct 
5 05:56:26 localhost ceph-mon[308154]: Reconfiguring mon.np0005471148 (monmap changed)... Oct 5 05:56:26 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:26 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain Oct 5 05:56:26 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s Oct 5 05:56:26 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0) Oct 5 05:56:26 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471148.fayrer (monmap changed)... Oct 5 05:56:26 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471148.fayrer (monmap changed)... 
Oct 5 05:56:26 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 5 05:56:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:26 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 5 05:56:26 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr services"} : dispatch Oct 5 05:56:26 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:26 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:26 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain Oct 5 05:56:26 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain Oct 5 05:56:27 localhost nova_compute[297021]: 2025-10-05 09:56:27.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:27 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0) Oct 5 05:56:27 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0) Oct 5 05:56:27 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471148 (monmap changed)... Oct 5 05:56:27 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471148 (monmap changed)... Oct 5 05:56:27 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 5 05:56:27 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:27 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:27 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:27 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:56:27 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:56:27 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:27 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:27 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)... 
Oct 5 05:56:27 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:27 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:27 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain Oct 5 05:56:27 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:27 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:27 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:27 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:27 localhost nova_compute[297021]: 2025-10-05 09:56:27.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:28 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0) Oct 5 05:56:28 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0) Oct 5 05:56:28 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] 
Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:56:28 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:56:28 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 5 05:56:28 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:28 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:28 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:28 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:56:28 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:56:28 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s Oct 5 05:56:28 localhost ceph-mon[308154]: Reconfiguring crash.np0005471148 (monmap changed)... 
Oct 5 05:56:28 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:56:28 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:28 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:28 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:28 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:28 localhost ceph-mgr[301561]: [progress INFO root] Writing back 50 completed events Oct 5 05:56:28 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 05:56:28 localhost podman[315400]: Oct 5 05:56:28 localhost podman[315400]: 2025-10-05 09:56:28.829377225 +0000 UTC m=+0.079789426 container create 26a4b99d12ce24a33bf2b2122850ae318d84cd5f590c828bd833d4eb1f560de1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, architecture=x86_64, distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True) Oct 5 05:56:28 localhost systemd[1]: Started libpod-conmon-26a4b99d12ce24a33bf2b2122850ae318d84cd5f590c828bd833d4eb1f560de1.scope. Oct 5 05:56:28 localhost systemd[1]: Started libcrun container. Oct 5 05:56:28 localhost podman[315400]: 2025-10-05 09:56:28.796278761 +0000 UTC m=+0.046690982 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:56:28 localhost podman[315400]: 2025-10-05 09:56:28.907279779 +0000 UTC m=+0.157691970 container init 26a4b99d12ce24a33bf2b2122850ae318d84cd5f590c828bd833d4eb1f560de1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:56:28 localhost podman[315400]: 2025-10-05 09:56:28.918188843 +0000 UTC m=+0.168601064 container start 26a4b99d12ce24a33bf2b2122850ae318d84cd5f590c828bd833d4eb1f560de1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:56:28 localhost recursing_pasteur[315415]: 167 167 Oct 5 05:56:28 localhost systemd[1]: libpod-26a4b99d12ce24a33bf2b2122850ae318d84cd5f590c828bd833d4eb1f560de1.scope: Deactivated successfully. 
Oct 5 05:56:28 localhost podman[315400]: 2025-10-05 09:56:28.919083367 +0000 UTC m=+0.169495628 container attach 26a4b99d12ce24a33bf2b2122850ae318d84cd5f590c828bd833d4eb1f560de1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, distribution-scope=public, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.) 
Oct 5 05:56:28 localhost podman[315400]: 2025-10-05 09:56:28.926321162 +0000 UTC m=+0.176733403 container died 26a4b99d12ce24a33bf2b2122850ae318d84cd5f590c828bd833d4eb1f560de1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=553, version=7, CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main) Oct 5 05:56:29 localhost podman[315420]: 2025-10-05 09:56:29.034456412 +0000 UTC m=+0.095202331 container remove 26a4b99d12ce24a33bf2b2122850ae318d84cd5f590c828bd833d4eb1f560de1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_pasteur, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, RELEASE=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64) Oct 5 05:56:29 localhost systemd[1]: libpod-conmon-26a4b99d12ce24a33bf2b2122850ae318d84cd5f590c828bd833d4eb1f560de1.scope: Deactivated successfully. Oct 5 05:56:29 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:56:29 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:56:29 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Oct 5 05:56:29 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... 
Oct 5 05:56:29 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Oct 5 05:56:29 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:56:29 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:29 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:29 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:56:29 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:56:29 localhost ceph-mon[308154]: Reconfiguring crash.np0005471150 (monmap changed)... 
Oct 5 05:56:29 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:56:29 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:29 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:29 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:29 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:56:29 localhost podman[315491]: Oct 5 05:56:29 localhost podman[315491]: 2025-10-05 09:56:29.753483187 +0000 UTC m=+0.066516628 container create 9273bed90c80b85823897f5f2d5f150c37e571f274d736a09af07c3e37f2f4a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_rhodes, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main) Oct 5 05:56:29 localhost systemd[1]: Started 
libpod-conmon-9273bed90c80b85823897f5f2d5f150c37e571f274d736a09af07c3e37f2f4a0.scope. Oct 5 05:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:56:29 localhost systemd[1]: Started libcrun container. Oct 5 05:56:29 localhost podman[315491]: 2025-10-05 09:56:29.722684285 +0000 UTC m=+0.035717766 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:56:29 localhost podman[315491]: 2025-10-05 09:56:29.82621764 +0000 UTC m=+0.139251101 container init 9273bed90c80b85823897f5f2d5f150c37e571f274d736a09af07c3e37f2f4a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_rhodes, release=553, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.33.12) Oct 5 05:56:29 localhost podman[315491]: 2025-10-05 09:56:29.83622632 +0000 UTC m=+0.149259761 container start 9273bed90c80b85823897f5f2d5f150c37e571f274d736a09af07c3e37f2f4a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_rhodes, GIT_CLEAN=True, 
description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, release=553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, version=7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:56:29 localhost systemd[1]: var-lib-containers-storage-overlay-1869cd7facee5ee5c4e40a2927b2d2d52cda4ffe00aac92ad36df6ec7e8a9693-merged.mount: Deactivated successfully. 
Oct 5 05:56:29 localhost podman[315491]: 2025-10-05 09:56:29.837830384 +0000 UTC m=+0.150863865 container attach 9273bed90c80b85823897f5f2d5f150c37e571f274d736a09af07c3e37f2f4a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_rhodes, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, name=rhceph, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, vendor=Red Hat, Inc.) Oct 5 05:56:29 localhost gallant_rhodes[315507]: 167 167 Oct 5 05:56:29 localhost systemd[1]: libpod-9273bed90c80b85823897f5f2d5f150c37e571f274d736a09af07c3e37f2f4a0.scope: Deactivated successfully. 
Oct 5 05:56:29 localhost podman[315491]: 2025-10-05 09:56:29.842219602 +0000 UTC m=+0.155253063 container died 9273bed90c80b85823897f5f2d5f150c37e571f274d736a09af07c3e37f2f4a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_rhodes, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:56:29 localhost systemd[1]: tmp-crun.spycHL.mount: Deactivated successfully. Oct 5 05:56:29 localhost systemd[1]: var-lib-containers-storage-overlay-34bd81034601145388408193c8af9fbf5020dec028169ffce17ad9e15c6427ef-merged.mount: Deactivated successfully. 
Oct 5 05:56:29 localhost podman[315521]: 2025-10-05 09:56:29.94845146 +0000 UTC m=+0.096558448 container remove 9273bed90c80b85823897f5f2d5f150c37e571f274d736a09af07c3e37f2f4a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_rhodes, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, vcs-type=git, RELEASE=main, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7) Oct 5 05:56:29 localhost systemd[1]: libpod-conmon-9273bed90c80b85823897f5f2d5f150c37e571f274d736a09af07c3e37f2f4a0.scope: Deactivated successfully. 
Oct 5 05:56:29 localhost podman[315509]: 2025-10-05 09:56:29.918328498 +0000 UTC m=+0.104743220 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:56:30 localhost podman[315509]: 2025-10-05 09:56:30.001890503 +0000 UTC m=+0.188305285 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:56:30 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:56:30 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.27580 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 5 05:56:30 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:56:30 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:56:30 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Oct 5 05:56:30 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:56:30 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Oct 5 05:56:30 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:56:30 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:30 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:30 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:56:30 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:56:30 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Oct 5 05:56:30 localhost ceph-mon[308154]: Reconfiguring osd.1 (monmap changed)... 
Oct 5 05:56:30 localhost ceph-mon[308154]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:56:30 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:30 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:30 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:56:30 localhost podman[315610]: Oct 5 05:56:30 localhost podman[315610]: 2025-10-05 09:56:30.831659978 +0000 UTC m=+0.084692728 container create 6af99c13c1bf5b22d371b0d00b8a4dfd2c500da2f4ce21e3f6aed4edc26c0ec3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_brahmagupta, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-type=git, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, release=553, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55) Oct 5 05:56:30 localhost systemd[1]: Started libpod-conmon-6af99c13c1bf5b22d371b0d00b8a4dfd2c500da2f4ce21e3f6aed4edc26c0ec3.scope. Oct 5 05:56:30 localhost systemd[1]: Started libcrun container. 
Oct 5 05:56:30 localhost podman[315610]: 2025-10-05 09:56:30.797575897 +0000 UTC m=+0.050608667 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:56:30 localhost podman[315610]: 2025-10-05 09:56:30.900485396 +0000 UTC m=+0.153518136 container init 6af99c13c1bf5b22d371b0d00b8a4dfd2c500da2f4ce21e3f6aed4edc26c0ec3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_brahmagupta, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, ceph=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, distribution-scope=public, maintainer=Guillaume Abrioux ) Oct 5 05:56:30 localhost podman[315610]: 2025-10-05 09:56:30.911307978 +0000 UTC m=+0.164340758 container start 6af99c13c1bf5b22d371b0d00b8a4dfd2c500da2f4ce21e3f6aed4edc26c0ec3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_brahmagupta, version=7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, vendor=Red Hat, Inc., name=rhceph) Oct 5 05:56:30 localhost podman[315610]: 2025-10-05 09:56:30.911622797 +0000 UTC m=+0.164685137 container attach 6af99c13c1bf5b22d371b0d00b8a4dfd2c500da2f4ce21e3f6aed4edc26c0ec3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_brahmagupta, description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553) Oct 5 05:56:30 
localhost quirky_brahmagupta[315625]: 167 167 Oct 5 05:56:30 localhost systemd[1]: libpod-6af99c13c1bf5b22d371b0d00b8a4dfd2c500da2f4ce21e3f6aed4edc26c0ec3.scope: Deactivated successfully. Oct 5 05:56:30 localhost podman[315610]: 2025-10-05 09:56:30.921420522 +0000 UTC m=+0.174453312 container died 6af99c13c1bf5b22d371b0d00b8a4dfd2c500da2f4ce21e3f6aed4edc26c0ec3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_brahmagupta, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux , RELEASE=main, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_CLEAN=True) Oct 5 05:56:31 localhost podman[315630]: 2025-10-05 09:56:31.020903317 +0000 UTC m=+0.090780522 container remove 6af99c13c1bf5b22d371b0d00b8a4dfd2c500da2f4ce21e3f6aed4edc26c0ec3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_brahmagupta, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vcs-type=git, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553) Oct 5 05:56:31 localhost systemd[1]: libpod-conmon-6af99c13c1bf5b22d371b0d00b8a4dfd2c500da2f4ce21e3f6aed4edc26c0ec3.scope: Deactivated successfully. Oct 5 05:56:31 localhost ceph-mon[308154]: mon.np0005471150@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:31 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:56:31 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:56:31 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:56:31 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... 
Oct 5 05:56:31 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 5 05:56:31 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:56:31 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:31 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:31 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:56:31 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:56:31 localhost ceph-mon[308154]: Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:56:31 localhost ceph-mon[308154]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:56:31 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:31 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:31 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:56:31 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:56:31 localhost podman[315706]: Oct 5 05:56:31 localhost podman[315706]: 2025-10-05 09:56:31.830010604 +0000 UTC m=+0.065152660 container create deb0f18e2252fbae0e0f291ccbc2e3ae7754f495cac1135455b1bd211243008e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_einstein, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, 
CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:56:31 localhost systemd[1]: var-lib-containers-storage-overlay-cc7709c404633992e060441fb9d07e1c186686c30f86695bf0b3733878991662-merged.mount: Deactivated successfully. Oct 5 05:56:31 localhost systemd[1]: Started libpod-conmon-deb0f18e2252fbae0e0f291ccbc2e3ae7754f495cac1135455b1bd211243008e.scope. Oct 5 05:56:31 localhost podman[315706]: 2025-10-05 09:56:31.795878042 +0000 UTC m=+0.031020138 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:56:31 localhost systemd[1]: Started libcrun container. Oct 5 05:56:31 localhost podman[315706]: 2025-10-05 09:56:31.909346755 +0000 UTC m=+0.144488811 container init deb0f18e2252fbae0e0f291ccbc2e3ae7754f495cac1135455b1bd211243008e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_einstein, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, maintainer=Guillaume Abrioux , release=553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, vcs-type=git, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:56:31 
localhost podman[315706]: 2025-10-05 09:56:31.920516367 +0000 UTC m=+0.155658423 container start deb0f18e2252fbae0e0f291ccbc2e3ae7754f495cac1135455b1bd211243008e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_einstein, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:56:31 localhost podman[315706]: 2025-10-05 09:56:31.920858886 +0000 UTC m=+0.156001002 container attach deb0f18e2252fbae0e0f291ccbc2e3ae7754f495cac1135455b1bd211243008e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_einstein, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, version=7, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, 
GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64) Oct 5 05:56:31 localhost quizzical_einstein[315721]: 167 167 Oct 5 05:56:31 localhost systemd[1]: libpod-deb0f18e2252fbae0e0f291ccbc2e3ae7754f495cac1135455b1bd211243008e.scope: Deactivated successfully. Oct 5 05:56:31 localhost podman[315706]: 2025-10-05 09:56:31.924791843 +0000 UTC m=+0.159933949 container died deb0f18e2252fbae0e0f291ccbc2e3ae7754f495cac1135455b1bd211243008e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_einstein, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=553, distribution-scope=public, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_BRANCH=main) Oct 5 05:56:32 localhost podman[315726]: 2025-10-05 09:56:32.023226291 +0000 UTC m=+0.086173688 container remove deb0f18e2252fbae0e0f291ccbc2e3ae7754f495cac1135455b1bd211243008e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_einstein, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:56:32 localhost systemd[1]: libpod-conmon-deb0f18e2252fbae0e0f291ccbc2e3ae7754f495cac1135455b1bd211243008e.scope: Deactivated successfully. Oct 5 05:56:32 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:56:32 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:56:32 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... 
Oct 5 05:56:32 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:56:32 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 5 05:56:32 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:32 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 5 05:56:32 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr services"} : dispatch Oct 5 05:56:32 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:32 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:32 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:56:32 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:56:32 localhost nova_compute[297021]: 2025-10-05 09:56:32.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:32 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 
41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Oct 5 05:56:32 localhost nova_compute[297021]: 2025-10-05 09:56:32.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:32 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:56:32 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:56:32 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:32 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:32 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:32 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:32 localhost podman[315796]: Oct 5 05:56:32 localhost podman[315796]: 2025-10-05 09:56:32.740721413 +0000 UTC m=+0.083892636 container create f9c9a5f51beb68a27fa40286a0a65eb3d97127726a0f20e565905d03b9c8c779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_lalande, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git) Oct 5 05:56:32 localhost systemd[1]: Started libpod-conmon-f9c9a5f51beb68a27fa40286a0a65eb3d97127726a0f20e565905d03b9c8c779.scope. Oct 5 05:56:32 localhost systemd[1]: Started libcrun container. Oct 5 05:56:32 localhost podman[315796]: 2025-10-05 09:56:32.804743322 +0000 UTC m=+0.147914535 container init f9c9a5f51beb68a27fa40286a0a65eb3d97127726a0f20e565905d03b9c8c779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_lalande, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.buildah.version=1.33.12, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:56:32 
localhost podman[315796]: 2025-10-05 09:56:32.705736549 +0000 UTC m=+0.048907832 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:56:32 localhost podman[315796]: 2025-10-05 09:56:32.81357043 +0000 UTC m=+0.156741643 container start f9c9a5f51beb68a27fa40286a0a65eb3d97127726a0f20e565905d03b9c8c779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_lalande, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:56:32 localhost podman[315796]: 2025-10-05 09:56:32.813870198 +0000 UTC m=+0.157041451 container attach f9c9a5f51beb68a27fa40286a0a65eb3d97127726a0f20e565905d03b9c8c779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_lalande, build-date=2025-09-24T08:57:55, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, version=7, ceph=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph) Oct 5 05:56:32 localhost quizzical_lalande[315812]: 167 167 Oct 5 05:56:32 localhost systemd[1]: libpod-f9c9a5f51beb68a27fa40286a0a65eb3d97127726a0f20e565905d03b9c8c779.scope: Deactivated successfully. Oct 5 05:56:32 localhost podman[315796]: 2025-10-05 09:56:32.817238319 +0000 UTC m=+0.160409562 container died f9c9a5f51beb68a27fa40286a0a65eb3d97127726a0f20e565905d03b9c8c779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_lalande, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, 
distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph) Oct 5 05:56:32 localhost systemd[1]: var-lib-containers-storage-overlay-96bf32c2ad88739cadfcef6cb8d2149ca0b1ee9b8df12e194d64898ef62f7c7e-merged.mount: Deactivated successfully. Oct 5 05:56:32 localhost systemd[1]: var-lib-containers-storage-overlay-ae1c438b44defd358260c4dc4abd53218ccf5163c29a4dc74318f00d6d50c309-merged.mount: Deactivated successfully. Oct 5 05:56:32 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.27588 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005471148", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 5 05:56:32 localhost podman[315817]: 2025-10-05 09:56:32.92354904 +0000 UTC m=+0.095254513 container remove f9c9a5f51beb68a27fa40286a0a65eb3d97127726a0f20e565905d03b9c8c779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_lalande, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, release=553) Oct 5 05:56:32 localhost systemd[1]: libpod-conmon-f9c9a5f51beb68a27fa40286a0a65eb3d97127726a0f20e565905d03b9c8c779.scope: Deactivated successfully. Oct 5 05:56:32 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:56:33 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:56:33 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471150 (monmap changed)... Oct 5 05:56:33 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471150 (monmap changed)... Oct 5 05:56:33 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Oct 5 05:56:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:33 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Oct 5 05:56:33 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Oct 5 05:56:33 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:33 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:33 localhost 
ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:56:33 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:56:33 localhost podman[315887]: Oct 5 05:56:33 localhost podman[315887]: 2025-10-05 09:56:33.627353113 +0000 UTC m=+0.080038062 container create 642c22762242fac4edbbbb7179194716c47fe2948d4992b0b5a85672d2853d03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_carson, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, architecture=x86_64, ceph=True, vcs-type=git, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, distribution-scope=public, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:56:33 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... 
Oct 5 05:56:33 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:56:33 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:33 localhost ceph-mon[308154]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:33 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:33 localhost systemd[1]: Started libpod-conmon-642c22762242fac4edbbbb7179194716c47fe2948d4992b0b5a85672d2853d03.scope. Oct 5 05:56:33 localhost systemd[1]: Started libcrun container. Oct 5 05:56:33 localhost podman[315887]: 2025-10-05 09:56:33.69242424 +0000 UTC m=+0.145109269 container init 642c22762242fac4edbbbb7179194716c47fe2948d4992b0b5a85672d2853d03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_carson, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.33.12, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:56:33 localhost podman[315887]: 2025-10-05 
09:56:33.59501509 +0000 UTC m=+0.047700089 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:56:33 localhost podman[315887]: 2025-10-05 09:56:33.701059963 +0000 UTC m=+0.153744962 container start 642c22762242fac4edbbbb7179194716c47fe2948d4992b0b5a85672d2853d03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_carson, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 5 05:56:33 localhost podman[315887]: 2025-10-05 09:56:33.701905126 +0000 UTC m=+0.154590155 container attach 642c22762242fac4edbbbb7179194716c47fe2948d4992b0b5a85672d2853d03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_carson, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, release=553, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=) Oct 5 05:56:33 localhost hardcore_carson[315902]: 167 167 Oct 5 05:56:33 localhost systemd[1]: libpod-642c22762242fac4edbbbb7179194716c47fe2948d4992b0b5a85672d2853d03.scope: Deactivated successfully. Oct 5 05:56:33 localhost podman[315887]: 2025-10-05 09:56:33.726314135 +0000 UTC m=+0.178999064 container died 642c22762242fac4edbbbb7179194716c47fe2948d4992b0b5a85672d2853d03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_carson, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.33.12, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat 
Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 05:56:33 localhost podman[315907]: 2025-10-05 09:56:33.797537418 +0000 UTC m=+0.065361116 container remove 642c22762242fac4edbbbb7179194716c47fe2948d4992b0b5a85672d2853d03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_carson, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-09-24T08:57:55, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:56:33 localhost systemd[1]: libpod-conmon-642c22762242fac4edbbbb7179194716c47fe2948d4992b0b5a85672d2853d03.scope: Deactivated successfully. Oct 5 05:56:33 localhost systemd[1]: var-lib-containers-storage-overlay-a333bb900d091b61c9a4ceb8e7b4529a4d64ffd8980dd44deb3014e4eac0d1c2-merged.mount: Deactivated successfully. 
Oct 5 05:56:33 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:56:33 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:56:33 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:56:33 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:56:33 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 5 05:56:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:33 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:33 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:33 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:56:33 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:56:34 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 
op/s Oct 5 05:56:34 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.34529 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005471148"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:56:34 localhost ceph-mgr[301561]: [cephadm INFO root] Remove daemons mon.np0005471148 Oct 5 05:56:34 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005471148 Oct 5 05:56:34 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "quorum_status"} v 0) Oct 5 05:56:34 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "quorum_status"} : dispatch Oct 5 05:56:34 localhost ceph-mgr[301561]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005471148: new quorum should be ['np0005471152', 'np0005471150', 'np0005471151'] (from ['np0005471152', 'np0005471150', 'np0005471151']) Oct 5 05:56:34 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005471148: new quorum should be ['np0005471152', 'np0005471150', 'np0005471151'] (from ['np0005471152', 'np0005471150', 'np0005471151']) Oct 5 05:56:34 localhost ceph-mgr[301561]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005471148 from monmap... Oct 5 05:56:34 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing monitor np0005471148 from monmap... 
Oct 5 05:56:34 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e12 handle_command mon_command({"prefix": "mon rm", "name": "np0005471148"} v 0) Oct 5 05:56:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon rm", "name": "np0005471148"} : dispatch Oct 5 05:56:34 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005471148 from np0005471148.localdomain -- ports [] Oct 5 05:56:34 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005471148 from np0005471148.localdomain -- ports [] Oct 5 05:56:34 localhost ceph-mon[308154]: mon.np0005471150@2(peon) e13 my rank is now 1 (was 2) Oct 5 05:56:34 localhost ceph-mgr[301561]: client.34501 ms_handle_reset on v2:172.18.0.108:3300/0 Oct 5 05:56:34 localhost ceph-mgr[301561]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 5 05:56:34 localhost ceph-mgr[301561]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 5 05:56:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:56:34 localhost ceph-mgr[301561]: client.27552 ms_handle_reset on v2:172.18.0.105:3300/0 Oct 5 05:56:34 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:56:34 localhost ceph-mon[308154]: paxos.1).electionLogic(48) init, last seen epoch 48 Oct 5 05:56:34 localhost ceph-mon[308154]: mon.np0005471150@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:34 localhost ceph-mon[308154]: mon.np0005471150@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:34 localhost podman[315924]: 2025-10-05 09:56:34.6705404 +0000 UTC m=+0.074514804 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:56:34 localhost podman[315924]: 2025-10-05 09:56:34.685813592 +0000 UTC m=+0.089787996 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:56:34 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:56:36 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Oct 5 05:56:37 localhost nova_compute[297021]: 2025-10-05 09:56:37.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:37 localhost nova_compute[297021]: 2025-10-05 09:56:37.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:38 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.837 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.838 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.862 12 DEBUG ceilometer.compute.pollsters [-] 
2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.863 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f0351416-f6e8-4df3-abe3-9046521098b5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:56:38.838906', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 
'9632b3d2-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': '063791ed322c0a64ffa7bb5a646cf73d6c082592e1455d159f1f87507b99b33b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:56:38.838906', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9632c976-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': 'e3fa4e135785734e82fa661e7cb09dc2ab9f703f8c199c7e9b67272c96a96a04'}]}, 'timestamp': '2025-10-05 09:56:38.863568', '_unique_id': 'e002e73cd9074cec9203433291d619fb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.866 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.866 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.877 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.877 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8d68c6fe-becb-4166-bba8-20dbb177d245', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:56:38.867055', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9634fa34-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.09097713, 'message_signature': '0832bdc170f08a3db1d938031b5e995e930d389ed11191b71f15de5425d05889'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:56:38.867055', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96350c0e-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.09097713, 'message_signature': '751f8bdbda4ea54fed44e6cc8cd8054e696896f67efbafa4b6452a5c48e2b097'}]}, 'timestamp': '2025-10-05 09:56:38.878359', '_unique_id': '4f38a7d666a646e98c21ae64d0d0f49a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.879 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.880 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.884 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e2610bb9-67b1-42aa-90cf-4a84d88cc24b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.880812', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '96361d2e-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': 'b29e98d51e550d35d4faecd703d41b1315e15b24c163f2c7ae95a2b32757c9b0'}]}, 'timestamp': '2025-10-05 09:56:38.885444', '_unique_id': 'c2941a46a90442089c1367372d1f5d5d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.886 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.888 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.888 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e17239b3-1eda-45f4-8a81-404f724f322d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.888257', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '9636a640-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': '2d6a9e5dc4f9b8df659416e70c4f8172fb71bb3ea31dc81a1b00d7ef97679450'}]}, 'timestamp': '2025-10-05 09:56:38.888864', '_unique_id': '68e98370ca77437dbf595538e4b815c4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:56:38.889 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.889 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.891 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.891 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ec7d9203-7301-441c-9a07-5ca0838ba172', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.891446', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '96371eea-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': 'd541ca291de592b77b1c99d57c26ce7aee54f3250af990d1c02a1cf85b52901b'}]}, 'timestamp': '2025-10-05 09:56:38.891953', '_unique_id': '0c5a62979a08475bb6774c00aa4e01d1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.892 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.894 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.894 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.894 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.896 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3a623b38-810b-4cae-a3de-ebba5800ecf9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:56:38.894274', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96378e5c-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': '50d1e5b7ab831b2b10531adc2e4bdbafa53f3a1baac80db1a1a76937031ce3c0'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:56:38.894274', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96379fbe-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': '3051ff396d20d9ce55d400a6ec594d7a58c9c947ef69d07ab44d0d0888d71422'}]}, 'timestamp': '2025-10-05 09:56:38.895216', '_unique_id': '2ddd0eff07904ffca6cdea330b73bd9b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.896 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
    yield
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
    return fun(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
    self._connection = self._establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
    conn = self.transport.establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
    conn.connect()
  File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
    self.transport.connect()
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
    self._connect(self.host, self.port, self.connect_timeout)
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
    self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
    self.transport._send_notification(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
    self._driver.send_notification(target, ctxt, message, version,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
    return self._send(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
    with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
    return rpc_common.ConnectionContext(self._connection_pool,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
    self.connection = connection_pool.get(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
    return self.create(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
    return self.connection_cls(self.conf, self.url, purpose, retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
    self.ensure_connection()
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
    self.connection.ensure_connection(
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
    self._ensure_connection(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
    raise ConnectionError(str(exc)) from exc
kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 09:56:38.897 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.897 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.899 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1bbf4b64-47e7-4db3-877c-2ea8b674015c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.897635', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '96381020-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': '76917d367604cc646b0825e6e53ca4d4726822fd21124320d100b497d3ecdd80'}]}, 'timestamp': '2025-10-05 09:56:38.898147', '_unique_id': 'f9f117aa2e1a43c3bb8561167e6ab439'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.899 12 ERROR oslo_messaging.notify.messaging
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
    yield
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
    return fun(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
    self._connection = self._establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
    conn = self.transport.establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
    conn.connect()
  File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
    self.transport.connect()
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
    self._connect(self.host, self.port, self.connect_timeout)
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
    self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
    self.transport._send_notification(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
    self._driver.send_notification(target, ctxt, message, version,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
    return self._send(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
    with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
    return rpc_common.ConnectionContext(self._connection_pool,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
    self.connection = connection_pool.get(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
    return self.create(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
    return self.connection_cls(self.conf, self.url, purpose, retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
    self.ensure_connection()
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
    self.connection.ensure_connection(
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
    self._ensure_connection(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
    raise ConnectionError(str(exc)) from exc
kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.900 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.900 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.902 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'ec64ae9c-0d6c-458c-ad08-e4a603a732fe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.900508', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '96388226-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': 'd1e5f3b5d4645f8b8528c45eec756e5cf25c1dd849b2daea90ce46d59f213c1a'}]}, 'timestamp': '2025-10-05 09:56:38.901045', '_unique_id': '587547e1b9944b9ba8684d79ba80ff01'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.902 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.902 12 ERROR oslo_messaging.notify.messaging
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
    yield
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
    return fun(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
    self._connection = self._establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
    conn = self.transport.establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
    conn.connect()
  File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
    self.transport.connect()
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
    self._connect(self.host, self.port, self.connect_timeout)
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
    self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
    self.transport._send_notification(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
    self._driver.send_notification(target, ctxt, message, version,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
    return self._send(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
    with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
    return rpc_common.ConnectionContext(self._connection_pool,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
    self.connection = connection_pool.get(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
    return self.create(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
    return self.connection_cls(self.conf, self.url, purpose, retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
    self.ensure_connection()
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
    self.connection.ensure_connection(
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
    self._ensure_connection(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
    raise ConnectionError(str(exc)) from exc
kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 09:56:38.903 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.903 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd1d642a2-ee83-4255-b272-9adad4e99e99', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.903533', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '9638f7c4-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': '9b85d91c2146f576f187d4682a852faefd2ebe08a0b1325d0805e73c4bdc81f4'}]}, 'timestamp': '2025-10-05 09:56:38.904094', '_unique_id': '6d482e4dbc094faba3bb42c9efe53ced'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.906 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.906 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.906 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'dba9cd09-4ea9-40a7-9e4a-3e6c71591cd3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.906861', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '963978f2-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': '8fc2861676e22771d78fd67da0c4088d48c46ba733410b32e55aed9fdb020b89'}]}, 'timestamp': '2025-10-05 09:56:38.907361', '_unique_id': 'eda3750b24ba462d8f2951bdab0c047e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.909 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.909 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.926 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8df942a8-7751-4935-9c1c-d661244cd3c2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:56:38.910245', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 
'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '963c6878-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.14986404, 'message_signature': '13aff577edbbf74a404089ef3beb7ffb32695634174866b2fbd15bbfdcb6c178'}]}, 'timestamp': '2025-10-05 09:56:38.926629', '_unique_id': '236c236a4c42464c878c725f4a60b00c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 
05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 
05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.927 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.928 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.929 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.929 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd344f5dc-7dff-4120-9a72-09e39e6d9b4d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:56:38.929016', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '963cd9f2-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': 'e941c4da00ef471d3c8ac7acecefdd07fe485fa85a6ef5de1003dd615bdbde91'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:56:38.929016', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '963cec94-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': 'acb0ecde2f470be69dd059950d53906379e800e2846bab1a6fce08b42d4936c6'}]}, 'timestamp': '2025-10-05 09:56:38.929990', '_unique_id': '95613b6dc37a4ad4aa295b7d203a8b3c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.931 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.932 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.932 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.932 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': '33b9bcfd-99e2-4c8e-9c97-a757fb0ecfbb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.932611', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '963d66c4-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': '5e6a688480aa8690aa2a9b63f43bfe2ccfd324142025d37fbec2e5e8a65a4df7'}]}, 'timestamp': '2025-10-05 09:56:38.933107', '_unique_id': '0b7f9c56843b47e88b2be4e2645c01c2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.935 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6d7fb71f-f89b-45c2-b21e-c26e412430b6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:56:38.935472', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '963dd618-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.09097713, 'message_signature': '60cee2e2dd07858797abde1b8bfdea3735daae820d9bb79f7b0524b4a87905bb'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:56:38.935472', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '963de70c-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.09097713, 'message_signature': 'fdf353c8931607c43c4de51851f4d8df7792b3ca76fe5cb53e656642eb1d3403'}]}, 'timestamp': '2025-10-05 09:56:38.936368', '_unique_id': '7b517c31782d4272b74a00d6dfe93d85'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.937 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.938 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.939 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cdd6e4af-a459-4d7f-9aa4-75caa14a71c4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:56:38.938874', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '963e5ae8-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': '90cf999bebcde7bf646086d93982ee8a756c72106297630c70be78cfd8c212fd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:56:38.938874', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '963e6d62-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': '63731f2258e1e34436d1ee58339e9e11baea0a649d7cc9c58564c2395c05afb5'}]}, 'timestamp': '2025-10-05 09:56:38.939801', '_unique_id': '43c89de7bf034322a7b2a25bf7a3f89c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:56:38.940 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.940 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.941 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.942 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.942 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8cf63f1a-da86-4bcc-b9ce-de043aacb31a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:56:38.942075', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '963ed7d4-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': 'd360a54dc3c77fce55b9455826a21d60c176e3f426764b97f28be2673c1eb4b4'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:56:38.942075', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '963eea76-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': 'ab87e71e2f93c35f220cf6a785320e5e2e17b33849b0ac0a0eb61eafd4af01ee'}]}, 'timestamp': '2025-10-05 09:56:38.943001', '_unique_id': '3191d695704440f1b834e0585c0b7105'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.945 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '896fb471-ed89-4b23-a32e-669d8349f6d3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.945239', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '963f54ca-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': '88c9c6be910fa851660abf1ea4af04fdecd5a3ee089063bc1cff2f206751c414'}]}, 'timestamp': '2025-10-05 09:56:38.945756', '_unique_id': 'f56ce15f458a4609be96606394c36076'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:56:38.947 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2efc3de2-ef22-4fee-a63a-59f1d527952c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:56:38.947963', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '963fbeb0-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.09097713, 'message_signature': 'f1bc1a93e2797baf2d0a355781c8e451ba336e238e10375463a117d092e81277'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:56:38.947963', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '963fd0ee-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.09097713, 'message_signature': 'cb03710ae7310cbfccc274c9c67750c076ffe8d88ff618caf555b1cb2b605b5f'}]}, 'timestamp': '2025-10-05 09:56:38.948921', '_unique_id': '9bc6fe1c15f2417280e488a5ec087558'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.949 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.951 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.951 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd30d3d30-1ce4-4f3b-aab3-c434607c0158', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:56:38.951439', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '964045b0-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.104728512, 'message_signature': '8f4dbf7c5800c6d020c99b6fe47748aab697fae05286f084d56a2c184c9c1947'}]}, 'timestamp': '2025-10-05 09:56:38.951925', '_unique_id': '941189ee64914979b9edeb8f21db774e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.952 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.954 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5648136f-6cec-4f13-8ebe-8fbf565e7d03', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:56:38.954179', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9640b14e-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': '86978acd362268f8731ec6f7cb9d55ad445dd62adba7b6b87ae56dffac87d021'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:56:38.954179', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9640c44a-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.06282376, 'message_signature': 'a66edcee4385f4b5b862f440bab1dfb85c5a10662a4559e2511752e3809fb731'}]}, 'timestamp': '2025-10-05 09:56:38.955122', '_unique_id': 'a27a7f2c423140a4bfeccb8213954424'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.956 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.957 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.957 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 13690000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b234e0b5-0dd0-4121-9d2f-1f748867c1ae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13690000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:56:38.957458', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '964131fa-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11678.14986404, 'message_signature': 'e35bb8be99a48666cf80a38ae0527dbf4e33888a9b8f735ec4ebd89e8bc1ce47'}]}, 'timestamp': '2025-10-05 09:56:38.958292', '_unique_id': 'e379ff6a84b0405ca24764d62d27b30c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:56:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:56:38.959 12 ERROR oslo_messaging.notify.messaging Oct 5 05:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 05:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:56:39 localhost ceph-mon[308154]: mon.np0005471150@1(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:39 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Oct 5 05:56:39 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... Oct 5 05:56:39 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:56:39 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:56:39 localhost ceph-mon[308154]: Reconfiguring mon.np0005471150 (monmap changed)... Oct 5 05:56:39 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:56:39 localhost ceph-mon[308154]: Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:56:39 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:39 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:56:39 localhost ceph-mon[308154]: Remove daemons mon.np0005471148 Oct 5 05:56:39 localhost ceph-mon[308154]: Safe to remove mon.np0005471148: new quorum should be ['np0005471152', 'np0005471150', 'np0005471151'] (from ['np0005471152', 'np0005471150', 'np0005471151']) Oct 5 05:56:39 localhost ceph-mon[308154]: Removing monitor np0005471148 from monmap... 
Oct 5 05:56:39 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon rm", "name": "np0005471148"} : dispatch Oct 5 05:56:39 localhost ceph-mon[308154]: Removing daemon mon.np0005471148 from np0005471148.localdomain -- ports [] Oct 5 05:56:39 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election Oct 5 05:56:39 localhost ceph-mon[308154]: mon.np0005471150 calling monitor election Oct 5 05:56:39 localhost ceph-mon[308154]: mon.np0005471152 is new leader, mons np0005471152,np0005471150 in quorum (ranks 0,1) Oct 5 05:56:39 localhost ceph-mon[308154]: Health check failed: 1/3 mons down, quorum np0005471152,np0005471150 (MON_DOWN) Oct 5 05:56:39 localhost ceph-mon[308154]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005471152,np0005471150 Oct 5 05:56:39 localhost ceph-mon[308154]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005471152,np0005471150 Oct 5 05:56:39 localhost ceph-mon[308154]: mon.np0005471151 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Oct 5 05:56:39 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:39 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:39 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:56:39 localhost systemd[1]: tmp-crun.Mhw04U.mount: Deactivated successfully. 
Oct 5 05:56:39 localhost podman[315948]: 2025-10-05 09:56:39.701138039 +0000 UTC m=+0.099544739 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:56:39 localhost podman[315947]: 2025-10-05 09:56:39.754743776 +0000 UTC m=+0.155922431 container health_status 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.license=GPLv2) Oct 5 05:56:39 localhost podman[315947]: 2025-10-05 09:56:39.763517453 +0000 UTC m=+0.164696128 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, managed_by=edpm_ansible) Oct 5 05:56:39 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:56:39 localhost podman[315948]: 2025-10-05 09:56:39.818745745 +0000 UTC m=+0.217152435 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd) Oct 5 05:56:39 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:56:40 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:56:40 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Oct 5 05:56:40 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... Oct 5 05:56:40 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:56:40 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:56:40 localhost ceph-mon[308154]: Reconfiguring osd.2 (monmap changed)... Oct 5 05:56:40 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:56:40 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:40 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:40 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:56:40 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:56:40 localhost ceph-mon[308154]: paxos.1).electionLogic(51) init, last seen epoch 51, mid-election, bumping Oct 5 05:56:40 localhost ceph-mon[308154]: mon.np0005471150@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:40 localhost ceph-mon[308154]: mon.np0005471150@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:40 localhost ceph-mon[308154]: mon.np0005471150@1(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:41 
localhost ceph-mon[308154]: mon.np0005471150@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:41 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.44413 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005471148.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:56:41 localhost ceph-mgr[301561]: [cephadm INFO root] Removed label mon from host np0005471148.localdomain Oct 5 05:56:41 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removed label mon from host np0005471148.localdomain Oct 5 05:56:41 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:56:41 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:56:41 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:56:41 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:56:41 localhost ceph-mon[308154]: mon.np0005471151 calling monitor election Oct 5 05:56:41 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election Oct 5 05:56:41 localhost ceph-mon[308154]: mon.np0005471150 calling monitor election Oct 5 05:56:41 localhost ceph-mon[308154]: mon.np0005471152 is new leader, mons np0005471152,np0005471150,np0005471151 in quorum (ranks 0,1,2) Oct 5 05:56:41 localhost ceph-mon[308154]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005471152,np0005471150) Oct 5 05:56:41 localhost ceph-mon[308154]: Cluster is now healthy Oct 5 05:56:41 localhost ceph-mon[308154]: overall HEALTH_OK Oct 5 05:56:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' 
entity='mgr.np0005471150.zwqxye' Oct 5 05:56:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:56:42 localhost nova_compute[297021]: 2025-10-05 09:56:42.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:42 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471151.jecxod (monmap changed)... Oct 5 05:56:42 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:56:42 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:56:42 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:56:42 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:56:42 localhost nova_compute[297021]: 2025-10-05 09:56:42.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:56:42 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.44419 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005471148.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:56:42 localhost ceph-mgr[301561]: [cephadm INFO root] Removed label mgr from host np0005471148.localdomain Oct 5 05:56:42 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005471148.localdomain Oct 5 05:56:42 localhost ceph-mon[308154]: Removed label mon from host np0005471148.localdomain Oct 5 05:56:42 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... 
Oct 5 05:56:42 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:56:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:43 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471151 (monmap changed)... Oct 5 05:56:43 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471151 (monmap changed)... Oct 5 05:56:43 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:56:43 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:56:43 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)... Oct 5 05:56:43 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:56:43 localhost ceph-mon[308154]: Removed label mgr from host np0005471148.localdomain Oct 5 05:56:43 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:43 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:43 localhost ceph-mon[308154]: Reconfiguring mon.np0005471151 (monmap changed)... 
Oct 5 05:56:43 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:43 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:56:43 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.44422 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005471148.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:56:43 localhost ceph-mgr[301561]: [cephadm INFO root] Removed label _admin from host np0005471148.localdomain Oct 5 05:56:43 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005471148.localdomain Oct 5 05:56:44 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:56:44 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471152 (monmap changed)... 
Oct 5 05:56:44 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:56:44 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:56:44 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:56:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:44 localhost ceph-mon[308154]: Removed label _admin from host np0005471148.localdomain Oct 5 05:56:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:44 localhost ceph-mon[308154]: Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:56:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:44 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:56:45 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Oct 5 05:56:45 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Oct 5 05:56:45 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:56:45 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:56:46 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... 
Oct 5 05:56:46 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Oct 5 05:56:46 localhost ceph-mon[308154]: mon.np0005471150@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:46 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:56:46 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:56:46 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:46 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:46 localhost ceph-mon[308154]: Reconfiguring osd.0 (monmap changed)... Oct 5 05:56:46 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:56:46 localhost ceph-mon[308154]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:56:46 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:46 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:56:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:56:46 localhost podman[315985]: 2025-10-05 09:56:46.676324413 +0000 UTC m=+0.083399233 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:56:46 localhost podman[315985]: 2025-10-05 09:56:46.685962043 +0000 UTC m=+0.093036903 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 5 05:56:46 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:56:47 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)...
Oct 5 05:56:47 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)...
Oct 5 05:56:47 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain
Oct 5 05:56:47 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain
Oct 5 05:56:47 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[308154]: Reconfiguring osd.3 (monmap changed)...
Oct 5 05:56:47 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:56:47 localhost ceph-mon[308154]: Reconfiguring daemon osd.3 on np0005471152.localdomain
Oct 5 05:56:47 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:56:47 localhost nova_compute[297021]: 2025-10-05 09:56:47.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:56:47 localhost nova_compute[297021]: 2025-10-05 09:56:47.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:56:47 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471152.kbhlus (monmap changed)...
Oct 5 05:56:47 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471152.kbhlus (monmap changed)...
Oct 5 05:56:47 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain
Oct 5 05:56:47 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain
Oct 5 05:56:48 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)...
Oct 5 05:56:48 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain
Oct 5 05:56:48 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:48 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:48 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:56:48 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:56:48 localhost ceph-mgr[301561]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 05:56:48 localhost ceph-mgr[301561]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 05:56:48 localhost ceph-mgr[301561]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 05:56:48 localhost ceph-mgr[301561]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 05:56:48 localhost ceph-mgr[301561]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 05:56:48 localhost ceph-mgr[301561]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 05:56:48 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471152 (monmap changed)...
Oct 5 05:56:48 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471152 (monmap changed)...
Oct 5 05:56:48 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain
Oct 5 05:56:48 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain
Oct 5 05:56:49 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)...
Oct 5 05:56:49 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain
Oct 5 05:56:49 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:49 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:49 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 05:56:49 localhost podman[316002]: 2025-10-05 09:56:49.684962318 +0000 UTC m=+0.086174548 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:56:49 localhost podman[316002]: 2025-10-05 09:56:49.755101722 +0000 UTC m=+0.156313942 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:56:49 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 05:56:50 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:56:50 localhost ceph-mon[308154]: Reconfiguring mon.np0005471152 (monmap changed)...
Oct 5 05:56:50 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain
Oct 5 05:56:50 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:50 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[308154]: mon.np0005471150@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:56:51 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Removing np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Removing np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:51 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:51 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Removing np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:51 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:51 localhost podman[248506]: time="2025-10-05T09:56:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:56:51 localhost podman[248506]: @ - - [05/Oct/2025:09:56:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1"
Oct 5 05:56:51 localhost podman[248506]: @ - - [05/Oct/2025:09:56:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18850 "" "Go-http-client/1.1"
Oct 5 05:56:51 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:51 localhost ceph-mon[308154]: Removing np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:51 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mon[308154]: Removing np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:51 localhost ceph-mon[308154]: Removing np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:51 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:51 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:52 localhost openstack_network_exporter[250601]: ERROR 09:56:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:56:52 localhost openstack_network_exporter[250601]: ERROR 09:56:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:56:52 localhost openstack_network_exporter[250601]: ERROR 09:56:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:56:52 localhost openstack_network_exporter[250601]: ERROR 09:56:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:56:52 localhost openstack_network_exporter[250601]:
Oct 5 05:56:52 localhost openstack_network_exporter[250601]: ERROR 09:56:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:56:52 localhost openstack_network_exporter[250601]:
Oct 5 05:56:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.
Oct 5 05:56:52 localhost podman[316223]: 2025-10-05 09:56:52.200696706 +0000 UTC m=+0.098702197 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:56:52 localhost podman[316223]: 2025-10-05 09:56:52.238746653 +0000 UTC m=+0.136752164 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:56:52 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully.
Oct 5 05:56:52 localhost nova_compute[297021]: 2025-10-05 09:56:52.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:56:52 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:56:52 localhost nova_compute[297021]: 2025-10-05 09:56:52.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:56:52 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev 8073cf0c-3220-4a75-9f33-9ac295e718b5 (Updating mgr deployment (-1 -> 3))
Oct 5 05:56:52 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Removing daemon mgr.np0005471148.fayrer from np0005471148.localdomain -- ports [8765]
Oct 5 05:56:52 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing daemon mgr.np0005471148.fayrer from np0005471148.localdomain -- ports [8765]
Oct 5 05:56:53 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:53 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:53 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:53 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:53 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:53 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:53 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:53 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:53 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:53 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:53 localhost ceph-mon[308154]: Removing daemon mgr.np0005471148.fayrer from np0005471148.localdomain -- ports [8765]
Oct 5 05:56:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.
Oct 5 05:56:53 localhost podman[316366]: 2025-10-05 09:56:53.679435212 +0000 UTC m=+0.086043953 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Oct 5 05:56:53 localhost podman[316366]: 2025-10-05 09:56:53.694623313 +0000 UTC m=+0.101231994 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 5 05:56:53 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully.
Oct 5 05:56:54 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:56:54 localhost ceph-mgr[301561]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.np0005471148.fayrer
Oct 5 05:56:54 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing key for mgr.np0005471148.fayrer
Oct 5 05:56:54 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev 8073cf0c-3220-4a75-9f33-9ac295e718b5 (Updating mgr deployment (-1 -> 3))
Oct 5 05:56:54 localhost ceph-mgr[301561]: [progress INFO root] Completed event 8073cf0c-3220-4a75-9f33-9ac295e718b5 (Updating mgr deployment (-1 -> 3)) in 2 seconds
Oct 5 05:56:54 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev 21a53c73-f162-4880-a792-0b82b5dc67dd (Updating node-proxy deployment (+4 -> 4))
Oct 5 05:56:54 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev 21a53c73-f162-4880-a792-0b82b5dc67dd (Updating node-proxy deployment (+4 -> 4))
Oct 5 05:56:54 localhost ceph-mgr[301561]: [progress INFO root] Completed event 21a53c73-f162-4880-a792-0b82b5dc67dd (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Oct 5 05:56:55 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth rm", "entity": "mgr.np0005471148.fayrer"} : dispatch
Oct 5 05:56:55 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005471148.fayrer"}]': finished
Oct 5 05:56:55 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:55 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:55 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.44428 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005471148.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Oct 5 05:56:55 localhost ceph-mgr[301561]: [cephadm INFO root] Added label _no_schedule to host np0005471148.localdomain
Oct 5 05:56:55 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005471148.localdomain
Oct 5 05:56:55 localhost ceph-mgr[301561]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471148.localdomain
Oct 5 05:56:55 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471148.localdomain
Oct 5 05:56:56 localhost ceph-mon[308154]: mon.np0005471150@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:56:56 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:56:56 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev 3c4440f2-92d4-4c19-b49b-94ecb277f031 (Updating crash deployment (-1 -> 3))
Oct 5 05:56:56 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Removing daemon crash.np0005471148 from np0005471148.localdomain -- ports []
Oct 5 05:56:56 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing daemon crash.np0005471148 from np0005471148.localdomain -- ports []
Oct 5 05:56:56 localhost ceph-mon[308154]: Removing key for mgr.np0005471148.fayrer
Oct 5 05:56:56 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:56 localhost ceph-mon[308154]: Added label _no_schedule to host np0005471148.localdomain
Oct 5 05:56:56 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:56 localhost ceph-mon[308154]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471148.localdomain
Oct 5 05:56:56 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:56 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:56 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:56 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:57 localhost nova_compute[297021]: 2025-10-05 09:56:57.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:56:57 localhost ceph-mon[308154]: Removing daemon crash.np0005471148 from np0005471148.localdomain -- ports []
Oct 5 05:56:57 localhost nova_compute[297021]: 2025-10-05 09:56:57.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:56:58 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:56:58 localhost ceph-mgr[301561]: [progress
INFO root] Writing back 50 completed events Oct 5 05:56:58 localhost ceph-mgr[301561]: [cephadm INFO cephadm.services.cephadmservice] Removing key for client.crash.np0005471148.localdomain Oct 5 05:56:58 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing key for client.crash.np0005471148.localdomain Oct 5 05:56:58 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev 3c4440f2-92d4-4c19-b49b-94ecb277f031 (Updating crash deployment (-1 -> 3)) Oct 5 05:56:58 localhost ceph-mgr[301561]: [progress INFO root] Completed event 3c4440f2-92d4-4c19-b49b-94ecb277f031 (Updating crash deployment (-1 -> 3)) in 2 seconds Oct 5 05:56:58 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev 2e7a8a57-3581-43d5-ad9f-9a2930d3a860 (Updating node-proxy deployment (+3 -> 3)) Oct 5 05:56:58 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev 2e7a8a57-3581-43d5-ad9f-9a2930d3a860 (Updating node-proxy deployment (+3 -> 3)) Oct 5 05:56:58 localhost ceph-mgr[301561]: [progress INFO root] Completed event 2e7a8a57-3581-43d5-ad9f-9a2930d3a860 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 05:56:59 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:59 localhost ceph-mon[308154]: Removing key for client.crash.np0005471148.localdomain Oct 5 05:56:59 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth rm", "entity": "client.crash.np0005471148.localdomain"} : dispatch Oct 5 05:56:59 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005471148.localdomain"}]': finished Oct 5 05:56:59 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:59 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' 
entity='mgr.np0005471150.zwqxye' Oct 5 05:56:59 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev 76655929-b887-4480-9249-d1cacb80a7bd (Updating node-proxy deployment (+3 -> 3)) Oct 5 05:56:59 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev 76655929-b887-4480-9249-d1cacb80a7bd (Updating node-proxy deployment (+3 -> 3)) Oct 5 05:56:59 localhost ceph-mgr[301561]: [progress INFO root] Completed event 76655929-b887-4480-9249-d1cacb80a7bd (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 05:57:00 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.34548 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005471148.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 5 05:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:57:00 localhost systemd[1]: tmp-crun.MzTjmP.mount: Deactivated successfully. 
Oct 5 05:57:00 localhost podman[316440]: 2025-10-05 09:57:00.195254765 +0000 UTC m=+0.084935964 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:57:00 localhost podman[316440]: 2025-10-05 09:57:00.210791415 +0000 UTC m=+0.100472644 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:57:00 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:57:00 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:57:00 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:57:00 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:57:00 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:57:00 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:00 localhost podman[316516]: Oct 5 05:57:00 localhost podman[316516]: 2025-10-05 09:57:00.837805435 +0000 UTC m=+0.068041369 container create 0d0a5bbe6ba3e2ab97d9c1684d3ac1351deb4c00da2dd3533e974d6ebac1102a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_bell, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, 
maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, ceph=True, version=7) Oct 5 05:57:00 localhost systemd[1]: Started libpod-conmon-0d0a5bbe6ba3e2ab97d9c1684d3ac1351deb4c00da2dd3533e974d6ebac1102a.scope. Oct 5 05:57:00 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:00 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:00 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:00 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:00 localhost ceph-mon[308154]: Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:57:00 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:00 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:57:00 localhost systemd[1]: Started libcrun container. 
Oct 5 05:57:00 localhost podman[316516]: 2025-10-05 09:57:00.805809171 +0000 UTC m=+0.036045115 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:00 localhost podman[316516]: 2025-10-05 09:57:00.913110029 +0000 UTC m=+0.143345963 container init 0d0a5bbe6ba3e2ab97d9c1684d3ac1351deb4c00da2dd3533e974d6ebac1102a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_bell, CEPH_POINT_RELEASE=, vcs-type=git, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, name=rhceph, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:57:00 localhost podman[316516]: 2025-10-05 09:57:00.931921506 +0000 UTC m=+0.162157430 container start 0d0a5bbe6ba3e2ab97d9c1684d3ac1351deb4c00da2dd3533e974d6ebac1102a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_bell, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, 
com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, RELEASE=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:57:00 localhost podman[316516]: 2025-10-05 09:57:00.932215174 +0000 UTC m=+0.162451098 container attach 0d0a5bbe6ba3e2ab97d9c1684d3ac1351deb4c00da2dd3533e974d6ebac1102a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_bell, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-09-24T08:57:55, release=553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7) Oct 5 05:57:00 
localhost festive_bell[316531]: 167 167 Oct 5 05:57:00 localhost systemd[1]: libpod-0d0a5bbe6ba3e2ab97d9c1684d3ac1351deb4c00da2dd3533e974d6ebac1102a.scope: Deactivated successfully. Oct 5 05:57:00 localhost podman[316516]: 2025-10-05 09:57:00.938586056 +0000 UTC m=+0.168821990 container died 0d0a5bbe6ba3e2ab97d9c1684d3ac1351deb4c00da2dd3533e974d6ebac1102a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_bell, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, version=7, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:57:01 localhost podman[316536]: 2025-10-05 09:57:01.041645169 +0000 UTC m=+0.090188477 container remove 0d0a5bbe6ba3e2ab97d9c1684d3ac1351deb4c00da2dd3533e974d6ebac1102a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_bell, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph 
ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True) Oct 5 05:57:01 localhost systemd[1]: libpod-conmon-0d0a5bbe6ba3e2ab97d9c1684d3ac1351deb4c00da2dd3533e974d6ebac1102a.scope: Deactivated successfully. Oct 5 05:57:01 localhost ceph-mon[308154]: mon.np0005471150@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:01 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Oct 5 05:57:01 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... Oct 5 05:57:01 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:57:01 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:57:01 localhost systemd[1]: var-lib-containers-storage-overlay-892f6cdc0d716ef81713ac988eef6dcd96269f82c6f0c67479c0395ffcf397a7-merged.mount: Deactivated successfully. 
Oct 5 05:57:01 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.34554 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005471148.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:57:01 localhost ceph-mgr[301561]: [cephadm INFO root] Removed host np0005471148.localdomain Oct 5 05:57:01 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removed host np0005471148.localdomain Oct 5 05:57:01 localhost podman[316608]: Oct 5 05:57:01 localhost podman[316608]: 2025-10-05 09:57:01.707796355 +0000 UTC m=+0.071766758 container create be7c4a08f2069d76806a30f885764d78f521a3fab55c3e1d5d475c4e411e1972 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_wilson, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, release=553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container) Oct 5 05:57:01 localhost systemd[1]: Started libpod-conmon-be7c4a08f2069d76806a30f885764d78f521a3fab55c3e1d5d475c4e411e1972.scope. Oct 5 05:57:01 localhost systemd[1]: Started libcrun container. 
Oct 5 05:57:01 localhost podman[316608]: 2025-10-05 09:57:01.674321001 +0000 UTC m=+0.038291434 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:01 localhost podman[316608]: 2025-10-05 09:57:01.774156557 +0000 UTC m=+0.138126970 container init be7c4a08f2069d76806a30f885764d78f521a3fab55c3e1d5d475c4e411e1972 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_wilson, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12) Oct 5 05:57:01 localhost podman[316608]: 2025-10-05 09:57:01.792604375 +0000 UTC m=+0.156574788 container start be7c4a08f2069d76806a30f885764d78f521a3fab55c3e1d5d475c4e411e1972 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_wilson, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, maintainer=Guillaume Abrioux , 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:57:01 localhost podman[316608]: 2025-10-05 09:57:01.793213692 +0000 UTC m=+0.157184145 container attach be7c4a08f2069d76806a30f885764d78f521a3fab55c3e1d5d475c4e411e1972 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_wilson, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:57:01 
localhost nostalgic_wilson[316624]: 167 167 Oct 5 05:57:01 localhost systemd[1]: libpod-be7c4a08f2069d76806a30f885764d78f521a3fab55c3e1d5d475c4e411e1972.scope: Deactivated successfully. Oct 5 05:57:01 localhost podman[316608]: 2025-10-05 09:57:01.796255584 +0000 UTC m=+0.160226017 container died be7c4a08f2069d76806a30f885764d78f521a3fab55c3e1d5d475c4e411e1972 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_wilson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:57:01 localhost podman[316629]: 2025-10-05 09:57:01.896005707 +0000 UTC m=+0.090413703 container remove be7c4a08f2069d76806a30f885764d78f521a3fab55c3e1d5d475c4e411e1972 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_wilson, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, 
com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:57:01 localhost systemd[1]: libpod-conmon-be7c4a08f2069d76806a30f885764d78f521a3fab55c3e1d5d475c4e411e1972.scope: Deactivated successfully. Oct 5 05:57:02 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Oct 5 05:57:02 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... Oct 5 05:57:02 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:57:02 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:57:02 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:02 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:02 localhost ceph-mon[308154]: Reconfiguring osd.1 (monmap changed)... 
Oct 5 05:57:02 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:57:02 localhost ceph-mon[308154]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:57:02 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:02 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain"} : dispatch Oct 5 05:57:02 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain"}]': finished Oct 5 05:57:02 localhost ceph-mon[308154]: Removed host np0005471148.localdomain Oct 5 05:57:02 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:02 localhost systemd[1]: tmp-crun.uI4O9k.mount: Deactivated successfully. Oct 5 05:57:02 localhost systemd[1]: var-lib-containers-storage-overlay-417eebe3d5e07d2aff3c1552c04b670ef62b1fa5caf8ecfab33a89b62bc362fe-merged.mount: Deactivated successfully. 
Oct 5 05:57:02 localhost nova_compute[297021]: 2025-10-05 09:57:02.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:02 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:02 localhost nova_compute[297021]: 2025-10-05 09:57:02.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:02 localhost podman[316705]: Oct 5 05:57:02 localhost podman[316705]: 2025-10-05 09:57:02.745687929 +0000 UTC m=+0.077497324 container create 02ad387764ce4b573f19dbb805b8fae9646febc5adcb9826472474005cc67379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_shaw, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, name=rhceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, architecture=x86_64, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True) Oct 5 05:57:02 localhost systemd[1]: Started 
libpod-conmon-02ad387764ce4b573f19dbb805b8fae9646febc5adcb9826472474005cc67379.scope. Oct 5 05:57:02 localhost systemd[1]: Started libcrun container. Oct 5 05:57:02 localhost podman[316705]: 2025-10-05 09:57:02.714807435 +0000 UTC m=+0.046616860 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:02 localhost podman[316705]: 2025-10-05 09:57:02.816297805 +0000 UTC m=+0.148107200 container init 02ad387764ce4b573f19dbb805b8fae9646febc5adcb9826472474005cc67379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_shaw, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:57:02 localhost podman[316705]: 2025-10-05 09:57:02.825587276 +0000 UTC m=+0.157396691 container start 02ad387764ce4b573f19dbb805b8fae9646febc5adcb9826472474005cc67379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_shaw, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.33.12, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:57:02 localhost podman[316705]: 2025-10-05 09:57:02.825899314 +0000 UTC m=+0.157708759 container attach 02ad387764ce4b573f19dbb805b8fae9646febc5adcb9826472474005cc67379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_shaw, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, name=rhceph, ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, architecture=x86_64, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:57:02 localhost adoring_shaw[316720]: 167 167 Oct 5 05:57:02 localhost systemd[1]: libpod-02ad387764ce4b573f19dbb805b8fae9646febc5adcb9826472474005cc67379.scope: Deactivated successfully. Oct 5 05:57:02 localhost podman[316705]: 2025-10-05 09:57:02.828776112 +0000 UTC m=+0.160585517 container died 02ad387764ce4b573f19dbb805b8fae9646febc5adcb9826472474005cc67379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_shaw, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=553, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:57:02 localhost podman[316725]: 2025-10-05 09:57:02.936191423 +0000 UTC m=+0.094916934 container remove 02ad387764ce4b573f19dbb805b8fae9646febc5adcb9826472474005cc67379 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_shaw, distribution-scope=public, ceph=True, io.buildah.version=1.33.12, GIT_BRANCH=main, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, name=rhceph, release=553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph) Oct 5 05:57:02 localhost systemd[1]: libpod-conmon-02ad387764ce4b573f19dbb805b8fae9646febc5adcb9826472474005cc67379.scope: Deactivated successfully. Oct 5 05:57:03 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:03 localhost ceph-mon[308154]: Reconfiguring osd.4 (monmap changed)... Oct 5 05:57:03 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:57:03 localhost ceph-mon[308154]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:57:03 localhost systemd[1]: var-lib-containers-storage-overlay-2422926d1ddaf4d71c5a4596e92c232181169161efc2fb2d35862b947d5d958c-merged.mount: Deactivated successfully. Oct 5 05:57:03 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... 
Oct 5 05:57:03 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:57:03 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:57:03 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:57:03 localhost nova_compute[297021]: 2025-10-05 09:57:03.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:03 localhost nova_compute[297021]: 2025-10-05 09:57:03.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:57:03 localhost ceph-mgr[301561]: [progress INFO root] Writing back 50 completed events Oct 5 05:57:03 localhost podman[316802]: Oct 5 05:57:03 localhost podman[316802]: 2025-10-05 09:57:03.846116591 +0000 UTC m=+0.080794472 container create b2ecc2d9d817e91dc9649b0eb0240f7eda00ee663ab43a7bd09839afb06d106d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_agnesi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , version=7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, RELEASE=main, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, distribution-scope=public) Oct 5 05:57:03 localhost systemd[1]: Started libpod-conmon-b2ecc2d9d817e91dc9649b0eb0240f7eda00ee663ab43a7bd09839afb06d106d.scope. Oct 5 05:57:03 localhost systemd[1]: Started libcrun container. Oct 5 05:57:03 localhost podman[316802]: 2025-10-05 09:57:03.81424 +0000 UTC m=+0.048917901 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:03 localhost podman[316802]: 2025-10-05 09:57:03.917128098 +0000 UTC m=+0.151805979 container init b2ecc2d9d817e91dc9649b0eb0240f7eda00ee663ab43a7bd09839afb06d106d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_agnesi, distribution-scope=public, RELEASE=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, version=7, GIT_CLEAN=True, release=553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., 
io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True) Oct 5 05:57:03 localhost podman[316802]: 2025-10-05 09:57:03.928584397 +0000 UTC m=+0.163262288 container start b2ecc2d9d817e91dc9649b0eb0240f7eda00ee663ab43a7bd09839afb06d106d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_agnesi, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, distribution-scope=public, version=7, release=553, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:57:03 localhost podman[316802]: 2025-10-05 09:57:03.928896316 +0000 UTC m=+0.163574207 container attach b2ecc2d9d817e91dc9649b0eb0240f7eda00ee663ab43a7bd09839afb06d106d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_agnesi, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red 
Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_BRANCH=main, RELEASE=main) Oct 5 05:57:03 localhost jovial_agnesi[316815]: 167 167 Oct 5 05:57:03 localhost systemd[1]: libpod-b2ecc2d9d817e91dc9649b0eb0240f7eda00ee663ab43a7bd09839afb06d106d.scope: Deactivated successfully. Oct 5 05:57:03 localhost podman[316802]: 2025-10-05 09:57:03.932494803 +0000 UTC m=+0.167172734 container died b2ecc2d9d817e91dc9649b0eb0240f7eda00ee663ab43a7bd09839afb06d106d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_agnesi, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:57:04 localhost podman[316820]: 2025-10-05 09:57:04.03125991 +0000 UTC m=+0.085095679 container remove b2ecc2d9d817e91dc9649b0eb0240f7eda00ee663ab43a7bd09839afb06d106d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_agnesi, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:57:04 localhost systemd[1]: libpod-conmon-b2ecc2d9d817e91dc9649b0eb0240f7eda00ee663ab43a7bd09839afb06d106d.scope: Deactivated successfully. Oct 5 05:57:04 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:57:04 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... 
Oct 5 05:57:04 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:57:04 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:57:04 localhost systemd[1]: tmp-crun.zZS5mF.mount: Deactivated successfully. Oct 5 05:57:04 localhost systemd[1]: var-lib-containers-storage-overlay-10951a9a18e1c4ae26494bf939c4469e6d389792c3105917e199ef0d91b57121-merged.mount: Deactivated successfully. Oct 5 05:57:04 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:57:04 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:04 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:57:04 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:04 localhost ceph-mgr[301561]: 
log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:04 localhost podman[316888]: Oct 5 05:57:04 localhost podman[316888]: 2025-10-05 09:57:04.74163623 +0000 UTC m=+0.084151443 container create f717712fdd24338f6dc8e5cb83b1d5e78e6a557a30d2431810a9e47a340435e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_mclean, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12) Oct 5 05:57:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:57:04 localhost systemd[1]: Started libpod-conmon-f717712fdd24338f6dc8e5cb83b1d5e78e6a557a30d2431810a9e47a340435e7.scope. Oct 5 05:57:04 localhost systemd[1]: Started libcrun container. 
Oct 5 05:57:04 localhost podman[316888]: 2025-10-05 09:57:04.707823757 +0000 UTC m=+0.050339040 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:04 localhost podman[316888]: 2025-10-05 09:57:04.814010664 +0000 UTC m=+0.156525927 container init f717712fdd24338f6dc8e5cb83b1d5e78e6a557a30d2431810a9e47a340435e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_mclean, vcs-type=git, io.openshift.expose-services=, release=553, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:57:04 localhost podman[316888]: 2025-10-05 09:57:04.826475241 +0000 UTC m=+0.168990464 container start f717712fdd24338f6dc8e5cb83b1d5e78e6a557a30d2431810a9e47a340435e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_mclean, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, 
maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:57:04 localhost podman[316888]: 2025-10-05 09:57:04.826724438 +0000 UTC m=+0.169239721 container attach f717712fdd24338f6dc8e5cb83b1d5e78e6a557a30d2431810a9e47a340435e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_mclean, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph) Oct 5 
05:57:04 localhost inspiring_mclean[316904]: 167 167 Oct 5 05:57:04 localhost systemd[1]: libpod-f717712fdd24338f6dc8e5cb83b1d5e78e6a557a30d2431810a9e47a340435e7.scope: Deactivated successfully. Oct 5 05:57:04 localhost podman[316888]: 2025-10-05 09:57:04.830912361 +0000 UTC m=+0.173427604 container died f717712fdd24338f6dc8e5cb83b1d5e78e6a557a30d2431810a9e47a340435e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_mclean, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, ceph=True, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, release=553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7) Oct 5 05:57:04 localhost podman[316903]: 2025-10-05 09:57:04.907108658 +0000 UTC m=+0.118373347 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:57:04 localhost podman[316918]: 2025-10-05 09:57:04.930361576 +0000 UTC m=+0.091349897 container remove f717712fdd24338f6dc8e5cb83b1d5e78e6a557a30d2431810a9e47a340435e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_mclean, release=553, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, 
CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:57:04 localhost systemd[1]: libpod-conmon-f717712fdd24338f6dc8e5cb83b1d5e78e6a557a30d2431810a9e47a340435e7.scope: Deactivated successfully. Oct 5 05:57:04 localhost podman[316903]: 2025-10-05 09:57:04.988892547 +0000 UTC m=+0.200157216 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=node_exporter) Oct 5 05:57:05 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:57:05 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471150 (monmap changed)... Oct 5 05:57:05 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471150 (monmap changed)... Oct 5 05:57:05 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:57:05 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:57:05 localhost systemd[1]: tmp-crun.SDUic9.mount: Deactivated successfully. Oct 5 05:57:05 localhost systemd[1]: var-lib-containers-storage-overlay-4b6218cf0a716657a1ca06ad24e11d76d65bcf411b53de8ace014a7f87e30b51-merged.mount: Deactivated successfully. Oct 5 05:57:05 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... 
Oct 5 05:57:05 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:57:05 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:05 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:05 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:57:05 localhost nova_compute[297021]: 2025-10-05 09:57:05.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:05 localhost nova_compute[297021]: 2025-10-05 09:57:05.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:05 localhost nova_compute[297021]: 2025-10-05 09:57:05.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:05 localhost nova_compute[297021]: 2025-10-05 09:57:05.441 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:57:05 localhost nova_compute[297021]: 2025-10-05 09:57:05.441 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:57:05 localhost nova_compute[297021]: 2025-10-05 09:57:05.442 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:57:05 localhost nova_compute[297021]: 2025-10-05 09:57:05.442 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:57:05 localhost nova_compute[297021]: 2025-10-05 09:57:05.442 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:57:05 localhost podman[317000]:
Oct 5 05:57:05 localhost podman[317000]: 2025-10-05 09:57:05.672859674 +0000 UTC m=+0.090565357 container create 7a87811223cf32e0087c3a89ed90e18629ebb9e010f55e6c73e1d741e43124d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_leakey, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., release=553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Oct 5 05:57:05 localhost systemd[1]: Started libpod-conmon-7a87811223cf32e0087c3a89ed90e18629ebb9e010f55e6c73e1d741e43124d3.scope.
Oct 5 05:57:05 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:05 localhost podman[317000]: 2025-10-05 09:57:05.634321383 +0000 UTC m=+0.052027066 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:05 localhost podman[317000]: 2025-10-05 09:57:05.744366764 +0000 UTC m=+0.162072447 container init 7a87811223cf32e0087c3a89ed90e18629ebb9e010f55e6c73e1d741e43124d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_leakey, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64)
Oct 5 05:57:05 localhost podman[317000]: 2025-10-05 09:57:05.752270018 +0000 UTC m=+0.169975681 container start 7a87811223cf32e0087c3a89ed90e18629ebb9e010f55e6c73e1d741e43124d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_leakey, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, version=7, com.redhat.component=rhceph-container, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux )
Oct 5 05:57:05 localhost podman[317000]: 2025-10-05 09:57:05.752509894 +0000 UTC m=+0.170215577 container attach 7a87811223cf32e0087c3a89ed90e18629ebb9e010f55e6c73e1d741e43124d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_leakey, name=rhceph, com.redhat.component=rhceph-container, distribution-scope=public, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7)
Oct 5 05:57:05 localhost nostalgic_leakey[317034]: 167 167
Oct 5 05:57:05 localhost systemd[1]: libpod-7a87811223cf32e0087c3a89ed90e18629ebb9e010f55e6c73e1d741e43124d3.scope: Deactivated successfully.
Oct 5 05:57:05 localhost podman[317000]: 2025-10-05 09:57:05.760904531 +0000 UTC m=+0.178610194 container died 7a87811223cf32e0087c3a89ed90e18629ebb9e010f55e6c73e1d741e43124d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_leakey, vcs-type=git, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_CLEAN=True, release=553, CEPH_POINT_RELEASE=)
Oct 5 05:57:05 localhost podman[317039]: 2025-10-05 09:57:05.859891083 +0000 UTC m=+0.081281025 container remove 7a87811223cf32e0087c3a89ed90e18629ebb9e010f55e6c73e1d741e43124d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_leakey, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.)
Oct 5 05:57:05 localhost systemd[1]: libpod-conmon-7a87811223cf32e0087c3a89ed90e18629ebb9e010f55e6c73e1d741e43124d3.scope: Deactivated successfully.
Oct 5 05:57:05 localhost ceph-mon[308154]: mon.np0005471150@1(peon) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 05:57:05 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/500633903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 05:57:05 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:57:05 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:57:05 localhost nova_compute[297021]: 2025-10-05 09:57:05.960 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:57:05 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:57:05 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.039 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.040 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 05:57:06 localhost ceph-mon[308154]: mon.np0005471150@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:57:06 localhost systemd[1]: var-lib-containers-storage-overlay-edd4a85169d155de435788ea0733228b2c6ddaad60ac50f6c130b354ca18d60b-merged.mount: Deactivated successfully.
Oct 5 05:57:06 localhost ceph-mon[308154]: Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:57:06 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:57:06 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:06 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:06 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.256 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.258 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11680MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.258 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.259 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:57:06 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.491 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.491 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.492 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.729 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing inventories for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 5 05:57:06 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev cb71ce10-c967-4c6a-920e-a6b39b2cab9f (Updating node-proxy deployment (+3 -> 3))
Oct 5 05:57:06 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev cb71ce10-c967-4c6a-920e-a6b39b2cab9f (Updating node-proxy deployment (+3 -> 3))
Oct 5 05:57:06 localhost ceph-mgr[301561]: [progress INFO root] Completed event cb71ce10-c967-4c6a-920e-a6b39b2cab9f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.974 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating ProviderTree inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 5 05:57:06 localhost nova_compute[297021]: 2025-10-05 09:57:06.975 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.003 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing aggregate associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.047 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing trait associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, traits: HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.091 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:57:07 localhost ceph-mon[308154]: Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:57:07 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:57:07 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:07 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:07 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:57:07 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:07 localhost ceph-mon[308154]: mon.np0005471150@1(peon) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 05:57:07 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3576432583' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.556 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.561 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.576 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.577 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.577 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.578 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.578 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.593 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 5 05:57:07 localhost nova_compute[297021]: 2025-10-05 09:57:07.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:57:08 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:57:08 localhost nova_compute[297021]: 2025-10-05 09:57:08.589 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:08 localhost nova_compute[297021]: 2025-10-05 09:57:08.590 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:08 localhost nova_compute[297021]: 2025-10-05 09:57:08.591 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:08 localhost nova_compute[297021]: 2025-10-05 09:57:08.591 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:08 localhost nova_compute[297021]: 2025-10-05 09:57:08.618 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Triggering sync for uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 5 05:57:08 localhost nova_compute[297021]: 2025-10-05 09:57:08.619 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:57:08 localhost nova_compute[297021]: 2025-10-05 09:57:08.619 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:57:08 localhost ceph-mgr[301561]: [progress INFO root] Writing back 50 completed events
Oct 5 05:57:08 localhost nova_compute[297021]: 2025-10-05 09:57:08.664 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:57:09 localhost nova_compute[297021]: 2025-10-05 09:57:09.447 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:09 localhost nova_compute[297021]: 2025-10-05 09:57:09.476 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:09 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:10 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.34566 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 5 05:57:10 localhost ceph-mgr[301561]: [cephadm INFO root] Saving service mon spec with placement label:mon
Oct 5 05:57:10 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Oct 5 05:57:10 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev 9d10d953-026c-468f-a83c-be9ff9c3c949 (Updating node-proxy deployment (+3 -> 3))
Oct 5 05:57:10 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev 9d10d953-026c-468f-a83c-be9ff9c3c949 (Updating node-proxy deployment (+3 -> 3))
Oct 5 05:57:10 localhost ceph-mgr[301561]: [progress INFO root] Completed event 9d10d953-026c-468f-a83c-be9ff9c3c949 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 5 05:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:57:10 localhost podman[317115]: 2025-10-05 09:57:10.360864582 +0000 UTC m=+0.083105074 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct 5 05:57:10 localhost podman[317115]: 2025-10-05 09:57:10.401080789 +0000 UTC m=+0.123321341 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 5 05:57:10 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:57:10 localhost podman[317117]: 2025-10-05 09:57:10.414939262 +0000 UTC m=+0.131867681 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3)
Oct 5 05:57:10 localhost nova_compute[297021]: 2025-10-05
09:57:10.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:10 localhost podman[317117]: 2025-10-05 09:57:10.427583404 +0000 UTC m=+0.144511803 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:57:10 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:57:10 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:10 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:10 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:10 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:11 localhost ceph-mon[308154]: mon.np0005471150@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:11 localhost nova_compute[297021]: 2025-10-05 09:57:11.433 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:11 localhost nova_compute[297021]: 2025-10-05 09:57:11.433 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:57:11 localhost nova_compute[297021]: 2025-10-05 09:57:11.434 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: 
[db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.497666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231497714, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2117, "num_deletes": 251, "total_data_size": 4290246, "memory_usage": 4351176, "flush_reason": "Manual Compaction"} Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231512441, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 2315531, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17203, "largest_seqno": 19315, "table_properties": {"data_size": 2307172, "index_size": 4672, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 23474, "raw_average_key_size": 22, "raw_value_size": 2288240, "raw_average_value_size": 2232, "num_data_blocks": 206, "num_entries": 1025, "num_filter_entries": 1025, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": 
"[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658182, "oldest_key_time": 1759658182, "file_creation_time": 1759658231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 14830 microseconds, and 6537 cpu microseconds. Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.512492) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 2315531 bytes OK Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.512518) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.515869) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.515890) EVENT_LOG_v1 {"time_micros": 1759658231515882, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.515914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: 
[db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 4279784, prev total WAL file size 4280108, number of live WAL files 2. Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.516965) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373535' seq:72057594037927935, type:22 .. '6D6772737461740034303036' seq:0, type:0; will stop at (end) Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(2261KB)], [27(17MB)] Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231517040, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 20160757, "oldest_snapshot_seqno": -1} Oct 5 05:57:11 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.34569 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005471152", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11692 keys, 18066597 bytes, temperature: kUnknown Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231631682, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 18066597, "file_checksum": "", 
"file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17999956, "index_size": 36332, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 312371, "raw_average_key_size": 26, "raw_value_size": 17800894, "raw_average_value_size": 1522, "num_data_blocks": 1383, "num_entries": 11692, "num_filter_entries": 11692, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.632276) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 18066597 bytes Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.634067) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.4 rd, 157.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 17.0 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(16.5) write-amplify(7.8) OK, records in: 12216, records dropped: 524 output_compression: NoCompression Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.634097) EVENT_LOG_v1 {"time_micros": 1759658231634083, "job": 14, "event": "compaction_finished", "compaction_time_micros": 114972, "compaction_time_cpu_micros": 50137, "output_level": 6, "num_output_files": 1, "total_output_size": 18066597, "num_input_records": 12216, "num_output_records": 11692, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231635159, "job": 14, "event": "table_file_deletion", "file_number": 29} Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231638304, "job": 
14, "event": "table_file_deletion", "file_number": 27} Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.516795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.638567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.638576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.638579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.638582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:11.638585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[308154]: Saving service mon spec with placement label:mon Oct 5 05:57:11 localhost nova_compute[297021]: 2025-10-05 09:57:11.948 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:57:11 localhost nova_compute[297021]: 2025-10-05 09:57:11.949 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:57:11 localhost nova_compute[297021]: 2025-10-05 09:57:11.949 2 DEBUG nova.network.neutron [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:57:11 localhost nova_compute[297021]: 2025-10-05 09:57:11.949 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:57:12 localhost nova_compute[297021]: 2025-10-05 09:57:12.272 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:57:12 localhost nova_compute[297021]: 2025-10-05 09:57:12.291 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:57:12 localhost nova_compute[297021]: 2025-10-05 09:57:12.291 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:57:12 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:12 localhost nova_compute[297021]: 2025-10-05 09:57:12.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:12 localhost nova_compute[297021]: 2025-10-05 09:57:12.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:12 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.44473 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005471152"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:57:12 localhost ceph-mgr[301561]: [cephadm INFO root] Remove daemons mon.np0005471152 Oct 5 05:57:12 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005471152 Oct 5 05:57:12 localhost ceph-mgr[301561]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005471152: new quorum should be ['np0005471150', 'np0005471151'] (from ['np0005471150', 'np0005471151']) Oct 5 05:57:12 localhost 
ceph-mgr[301561]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005471152: new quorum should be ['np0005471150', 'np0005471151'] (from ['np0005471150', 'np0005471151']) Oct 5 05:57:12 localhost ceph-mgr[301561]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005471152 from monmap... Oct 5 05:57:12 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing monitor np0005471152 from monmap... Oct 5 05:57:12 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005471152 from np0005471152.localdomain -- ports [] Oct 5 05:57:12 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005471152 from np0005471152.localdomain -- ports [] Oct 5 05:57:12 localhost ceph-mon[308154]: mon.np0005471150@1(peon) e14 my rank is now 0 (was 1) Oct 5 05:57:12 localhost ceph-mgr[301561]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 5 05:57:12 localhost ceph-mgr[301561]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 5 05:57:12 localhost ceph-mgr[301561]: client.27552 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 5 05:57:12 localhost ceph-mgr[301561]: client.34501 ms_handle_reset on v2:172.18.0.103:3300/0 Oct 5 05:57:12 localhost ceph-mon[308154]: mon.np0005471150@0(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471150"} v 0) Oct 5 05:57:12 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471150"} : dispatch Oct 5 05:57:12 localhost ceph-mon[308154]: mon.np0005471150@0(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:57:12 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:57:12 localhost ceph-mon[308154]: 
log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:57:12 localhost ceph-mon[308154]: paxos.0).electionLogic(54) init, last seen epoch 54 Oct 5 05:57:12 localhost ceph-mon[308154]: mon.np0005471150@0(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:57:12 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 is new leader, mons np0005471150,np0005471151 in quorum (ranks 0,1) Oct 5 05:57:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:12 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : monmap epoch 14 Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : last_changed 2025-10-05T09:57:12.933623+0000 Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : created 2025-10-05T07:42:01.637504+0000 Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : election_strategy: 1 Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005471150 Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005471151 Oct 5 05:57:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:57:13 localhost ceph-mon[308154]: 
log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005471152.pozuqw=up:active} 2 up:standby Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e32: np0005471150.zwqxye(active, since 54s), standbys: np0005471151.jecxod, np0005471152.kbhlus, np0005471148.fayrer Oct 5 05:57:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:13 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : overall HEALTH_OK Oct 5 05:57:13 localhost ceph-mon[308154]: Remove daemons mon.np0005471152 Oct 5 05:57:13 localhost ceph-mon[308154]: Safe to remove mon.np0005471152: new quorum should be ['np0005471150', 'np0005471151'] (from ['np0005471150', 'np0005471151']) Oct 5 05:57:13 localhost ceph-mon[308154]: Removing monitor np0005471152 from monmap... 
Oct 5 05:57:13 localhost ceph-mon[308154]: Removing daemon mon.np0005471152 from np0005471152.localdomain -- ports [] Oct 5 05:57:13 localhost ceph-mon[308154]: mon.np0005471151 calling monitor election Oct 5 05:57:13 localhost ceph-mon[308154]: mon.np0005471150 calling monitor election Oct 5 05:57:13 localhost ceph-mon[308154]: mon.np0005471150 is new leader, mons np0005471150,np0005471151 in quorum (ranks 0,1) Oct 5 05:57:13 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:13 localhost ceph-mon[308154]: overall HEALTH_OK Oct 5 05:57:13 localhost nova_compute[297021]: 2025-10-05 09:57:13.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:13 localhost nova_compute[297021]: 2025-10-05 09:57:13.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 05:57:13 localhost ceph-mgr[301561]: [progress INFO root] Writing back 50 completed events Oct 5 05:57:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 05:57:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:13 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating 
np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:13 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:57:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:57:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:14 
localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:14 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:14 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:14 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:14 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:57:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:57:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 05:57:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:14 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev 62430b82-d845-4c6a-8904-0ba5689cfc25 (Updating node-proxy deployment (+3 -> 3)) Oct 5 05:57:14 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev 62430b82-d845-4c6a-8904-0ba5689cfc25 (Updating node-proxy deployment (+3 -> 3)) Oct 5 05:57:14 localhost ceph-mgr[301561]: [progress INFO root] Completed event 62430b82-d845-4c6a-8904-0ba5689cfc25 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 05:57:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 05:57:14 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 05:57:15 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:57:15 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471150 (monmap changed)... 
Oct 5 05:57:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 5 05:57:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:15 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:15 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:57:15 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:57:15 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:15 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:15 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:15 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:15 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:15 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 
05:57:15 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:15 localhost podman[317543]: Oct 5 05:57:15 localhost podman[317543]: 2025-10-05 09:57:15.7173042 +0000 UTC m=+0.079740794 container create 42b24457b4fcef4524b2157988f6b2f73e5d8dfd8d45d9823082b4c336b15584 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_euclid, version=7, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 05:57:15 localhost systemd[1]: Started libpod-conmon-42b24457b4fcef4524b2157988f6b2f73e5d8dfd8d45d9823082b4c336b15584.scope. Oct 5 05:57:15 localhost systemd[1]: Started libcrun container. 
Oct 5 05:57:15 localhost podman[317543]: 2025-10-05 09:57:15.682889081 +0000 UTC m=+0.045325695 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:15 localhost podman[317543]: 2025-10-05 09:57:15.79064794 +0000 UTC m=+0.153084534 container init 42b24457b4fcef4524b2157988f6b2f73e5d8dfd8d45d9823082b4c336b15584 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_euclid, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, distribution-scope=public, release=553, ceph=True, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:57:15 localhost podman[317543]: 2025-10-05 09:57:15.802313036 +0000 UTC m=+0.164749650 container start 42b24457b4fcef4524b2157988f6b2f73e5d8dfd8d45d9823082b4c336b15584 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_euclid, release=553, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, name=rhceph, description=Red 
Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:57:15 localhost podman[317543]: 2025-10-05 09:57:15.802734177 +0000 UTC m=+0.165170811 container attach 42b24457b4fcef4524b2157988f6b2f73e5d8dfd8d45d9823082b4c336b15584 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_euclid, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, io.buildah.version=1.33.12, release=553, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, architecture=x86_64) Oct 5 05:57:15 localhost 
stoic_euclid[317559]: 167 167 Oct 5 05:57:15 localhost systemd[1]: libpod-42b24457b4fcef4524b2157988f6b2f73e5d8dfd8d45d9823082b4c336b15584.scope: Deactivated successfully. Oct 5 05:57:15 localhost podman[317543]: 2025-10-05 09:57:15.806903989 +0000 UTC m=+0.169340573 container died 42b24457b4fcef4524b2157988f6b2f73e5d8dfd8d45d9823082b4c336b15584 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_euclid, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7) Oct 5 05:57:15 localhost podman[317564]: 2025-10-05 09:57:15.91134326 +0000 UTC m=+0.093157797 container remove 42b24457b4fcef4524b2157988f6b2f73e5d8dfd8d45d9823082b4c336b15584 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_euclid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, release=553, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public) Oct 5 05:57:15 localhost systemd[1]: libpod-conmon-42b24457b4fcef4524b2157988f6b2f73e5d8dfd8d45d9823082b4c336b15584.scope: Deactivated successfully. Oct 5 05:57:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:16 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:16 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:16 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Oct 5 05:57:16 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... 
Oct 5 05:57:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Oct 5 05:57:16 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:57:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:16 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:16 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:57:16 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:57:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:16 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:16 localhost ceph-mon[308154]: Reconfiguring crash.np0005471150 (monmap changed)... 
Oct 5 05:57:16 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:57:16 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:16 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:16 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:57:16 localhost podman[317634]: Oct 5 05:57:16 localhost systemd[1]: var-lib-containers-storage-overlay-371c527e3d5f48fe2cfb09f5694ab0e7fdaa9e253d8cc3c3ed90bb9249dee5db-merged.mount: Deactivated successfully. Oct 5 05:57:16 localhost podman[317634]: 2025-10-05 09:57:16.732154642 +0000 UTC m=+0.088377328 container create 9012b10b0a2a9b5f01726289f339845c43f48a89f19a586179c598a619ce9cfb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_austin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:57:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:57:16 localhost systemd[1]: Started libpod-conmon-9012b10b0a2a9b5f01726289f339845c43f48a89f19a586179c598a619ce9cfb.scope. Oct 5 05:57:16 localhost podman[317634]: 2025-10-05 09:57:16.693593691 +0000 UTC m=+0.049816417 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:16 localhost systemd[1]: Started libcrun container. Oct 5 05:57:16 localhost podman[317634]: 2025-10-05 09:57:16.822678346 +0000 UTC m=+0.178901022 container init 9012b10b0a2a9b5f01726289f339845c43f48a89f19a586179c598a619ce9cfb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_austin, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, description=Red Hat Ceph Storage 7) Oct 5 05:57:16 localhost affectionate_austin[317655]: 167 167 Oct 5 05:57:16 localhost systemd[1]: libpod-9012b10b0a2a9b5f01726289f339845c43f48a89f19a586179c598a619ce9cfb.scope: 
Deactivated successfully. Oct 5 05:57:16 localhost podman[317648]: 2025-10-05 09:57:16.875820161 +0000 UTC m=+0.116073706 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 05:57:16 localhost podman[317634]: 2025-10-05 
09:57:16.884832074 +0000 UTC m=+0.241054750 container start 9012b10b0a2a9b5f01726289f339845c43f48a89f19a586179c598a619ce9cfb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_austin, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, name=rhceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64) Oct 5 05:57:16 localhost podman[317634]: 2025-10-05 09:57:16.885469021 +0000 UTC m=+0.241691747 container attach 9012b10b0a2a9b5f01726289f339845c43f48a89f19a586179c598a619ce9cfb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_austin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph 
Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-09-24T08:57:55) Oct 5 05:57:16 localhost podman[317634]: 2025-10-05 09:57:16.88877297 +0000 UTC m=+0.244995696 container died 9012b10b0a2a9b5f01726289f339845c43f48a89f19a586179c598a619ce9cfb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_austin, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=553, ceph=True, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph) Oct 5 05:57:16 localhost podman[317648]: 2025-10-05 09:57:16.91137511 +0000 UTC m=+0.151628655 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 05:57:16 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:57:16 localhost podman[317666]: 2025-10-05 09:57:16.961782621 +0000 UTC m=+0.108217412 container remove 9012b10b0a2a9b5f01726289f339845c43f48a89f19a586179c598a619ce9cfb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_austin, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=553, version=7, vendor=Red Hat, Inc., name=rhceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, ceph=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:57:16 localhost systemd[1]: libpod-conmon-9012b10b0a2a9b5f01726289f339845c43f48a89f19a586179c598a619ce9cfb.scope: Deactivated successfully. 
Oct 5 05:57:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:17 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:17 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:17 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Oct 5 05:57:17 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... Oct 5 05:57:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Oct 5 05:57:17 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:57:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:17 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:17 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:57:17 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:57:17 localhost nova_compute[297021]: 2025-10-05 09:57:17.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:17 localhost nova_compute[297021]: 2025-10-05 09:57:17.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:17 localhost ceph-mon[308154]: Reconfiguring osd.1 (monmap changed)... Oct 5 05:57:17 localhost ceph-mon[308154]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:57:17 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:17 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:17 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:57:17 localhost systemd[1]: var-lib-containers-storage-overlay-84f55f917ba9313e3977838cfa151dcfe014051c68fb48f8749d183d0ef2a0a6-merged.mount: Deactivated successfully. 
Oct 5 05:57:17 localhost podman[317748]:
Oct 5 05:57:17 localhost podman[317748]: 2025-10-05 09:57:17.878439362 +0000 UTC m=+0.087105603 container create cc76de7c260fcec9d31a312a76ae6ca05a47de4c77976f5934b68b2f607a61f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_pascal, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 05:57:17 localhost systemd[1]: Started libpod-conmon-cc76de7c260fcec9d31a312a76ae6ca05a47de4c77976f5934b68b2f607a61f0.scope.
Oct 5 05:57:17 localhost podman[317748]: 2025-10-05 09:57:17.842941233 +0000 UTC m=+0.051607484 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:17 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:17 localhost podman[317748]: 2025-10-05 09:57:17.96058661 +0000 UTC m=+0.169252871 container init cc76de7c260fcec9d31a312a76ae6ca05a47de4c77976f5934b68b2f607a61f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_pascal, name=rhceph, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , release=553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7)
Oct 5 05:57:17 localhost podman[317748]: 2025-10-05 09:57:17.973635663 +0000 UTC m=+0.182301914 container start cc76de7c260fcec9d31a312a76ae6ca05a47de4c77976f5934b68b2f607a61f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_pascal, description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, release=553, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.expose-services=, version=7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Oct 5 05:57:17 localhost podman[317748]: 2025-10-05 09:57:17.973987392 +0000 UTC m=+0.182653643 container attach cc76de7c260fcec9d31a312a76ae6ca05a47de4c77976f5934b68b2f607a61f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_pascal, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True)
Oct 5 05:57:17 localhost affectionate_pascal[317763]: 167 167
Oct 5 05:57:17 localhost systemd[1]: libpod-cc76de7c260fcec9d31a312a76ae6ca05a47de4c77976f5934b68b2f607a61f0.scope: Deactivated successfully.
Oct 5 05:57:17 localhost podman[317748]: 2025-10-05 09:57:17.980140198 +0000 UTC m=+0.188806509 container died cc76de7c260fcec9d31a312a76ae6ca05a47de4c77976f5934b68b2f607a61f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_pascal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 05:57:18 localhost podman[317768]: 2025-10-05 09:57:18.084003102 +0000 UTC m=+0.092303933 container remove cc76de7c260fcec9d31a312a76ae6ca05a47de4c77976f5934b68b2f607a61f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_pascal, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=553, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, version=7, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, maintainer=Guillaume Abrioux )
Oct 5 05:57:18 localhost systemd[1]: libpod-conmon-cc76de7c260fcec9d31a312a76ae6ca05a47de4c77976f5934b68b2f607a61f0.scope: Deactivated successfully.
Oct 5 05:57:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:18 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:57:18 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:57:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 5 05:57:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:18 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:57:18 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:57:18 localhost ceph-mgr[301561]: [balancer INFO root] Optimize plan auto_2025-10-05_09:57:18
Oct 5 05:57:18 localhost ceph-mgr[301561]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 5 05:57:18 localhost ceph-mgr[301561]: [balancer INFO root] do_upmap
Oct 5 05:57:18 localhost ceph-mgr[301561]: [balancer INFO root] pools ['images', 'vms', '.mgr', 'manila_metadata', 'backups', 'manila_data', 'volumes']
Oct 5 05:57:18 localhost ceph-mgr[301561]: [balancer INFO root] prepared 0/10 changes
Oct 5 05:57:18 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] _maybe_adjust
Oct 5 05:57:18 localhost ceph-mgr[301561]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 05:57:18 localhost ceph-mgr[301561]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32)
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 05:57:18 localhost ceph-mgr[301561]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16)
Oct 5 05:57:18 localhost ceph-mgr[301561]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 05:57:18 localhost ceph-mgr[301561]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 05:57:18 localhost ceph-mgr[301561]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 05:57:18 localhost ceph-mgr[301561]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 05:57:18 localhost ceph-mgr[301561]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 05:57:18 localhost ceph-mgr[301561]: [progress INFO root] Writing back 50 completed events
Oct 5 05:57:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 05:57:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:18 localhost systemd[1]: var-lib-containers-storage-overlay-3bd9443ff25d19a579f6ef47905a5da20461b907dfba83b21f5dabc01ef8c74d-merged.mount: Deactivated successfully.
Oct 5 05:57:18 localhost ceph-mon[308154]: Reconfiguring osd.4 (monmap changed)...
Oct 5 05:57:18 localhost ceph-mon[308154]: Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:57:18 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:18 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:18 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:18 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:18 localhost podman[317844]:
Oct 5 05:57:19 localhost podman[317844]: 2025-10-05 09:57:19.000163299 +0000 UTC m=+0.082593270 container create c75718652b131ca0af8b4602cbd5c242a7cd6c0c22419f5119c361a413753df7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_satoshi, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=553, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph)
Oct 5 05:57:19 localhost systemd[1]: Started libpod-conmon-c75718652b131ca0af8b4602cbd5c242a7cd6c0c22419f5119c361a413753df7.scope.
Oct 5 05:57:19 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:19 localhost podman[317844]: 2025-10-05 09:57:19.065574255 +0000 UTC m=+0.148004196 container init c75718652b131ca0af8b4602cbd5c242a7cd6c0c22419f5119c361a413753df7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_satoshi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., architecture=x86_64)
Oct 5 05:57:19 localhost podman[317844]: 2025-10-05 09:57:18.970980192 +0000 UTC m=+0.053410163 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:19 localhost podman[317844]: 2025-10-05 09:57:19.075281747 +0000 UTC m=+0.157711698 container start c75718652b131ca0af8b4602cbd5c242a7cd6c0c22419f5119c361a413753df7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_satoshi, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, distribution-scope=public, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_CLEAN=True)
Oct 5 05:57:19 localhost podman[317844]: 2025-10-05 09:57:19.07574149 +0000 UTC m=+0.158171491 container attach c75718652b131ca0af8b4602cbd5c242a7cd6c0c22419f5119c361a413753df7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_satoshi, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, name=rhceph, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , RELEASE=main, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:57:19 localhost tender_satoshi[317859]: 167 167
Oct 5 05:57:19 localhost systemd[1]: libpod-c75718652b131ca0af8b4602cbd5c242a7cd6c0c22419f5119c361a413753df7.scope: Deactivated successfully.
Oct 5 05:57:19 localhost podman[317844]: 2025-10-05 09:57:19.079334816 +0000 UTC m=+0.161764807 container died c75718652b131ca0af8b4602cbd5c242a7cd6c0c22419f5119c361a413753df7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_satoshi, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux , version=7, io.openshift.tags=rhceph ceph, vcs-type=git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, release=553)
Oct 5 05:57:19 localhost podman[317864]: 2025-10-05 09:57:19.182690667 +0000 UTC m=+0.092363225 container remove c75718652b131ca0af8b4602cbd5c242a7cd6c0c22419f5119c361a413753df7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_satoshi, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main)
Oct 5 05:57:19 localhost systemd[1]: libpod-conmon-c75718652b131ca0af8b4602cbd5c242a7cd6c0c22419f5119c361a413753df7.scope: Deactivated successfully.
Oct 5 05:57:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:19 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:57:19 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:57:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 5 05:57:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 5 05:57:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr services"} : dispatch
Oct 5 05:57:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:19 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:57:19 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.292702) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658239292744, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 555, "num_deletes": 251, "total_data_size": 496996, "memory_usage": 507152, "flush_reason": "Manual Compaction"}
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658239297808, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 432548, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19320, "largest_seqno": 19870, "table_properties": {"data_size": 429408, "index_size": 1059, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8717, "raw_average_key_size": 21, "raw_value_size": 422612, "raw_average_value_size": 1040, "num_data_blocks": 43, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658231, "oldest_key_time": 1759658231, "file_creation_time": 1759658239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 5153 microseconds, and 2453 cpu microseconds.
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.297854) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 432548 bytes OK
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.297880) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.299644) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.299664) EVENT_LOG_v1 {"time_micros": 1759658239299656, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.299682) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 493688, prev total WAL file size 493688, number of live WAL files 2.
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.300335) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end)
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(422KB)], [30(17MB)]
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658239300663, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 18499145, "oldest_snapshot_seqno": -1}
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11569 keys, 15321711 bytes, temperature: kUnknown
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658239374171, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 15321711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15258020, "index_size": 33702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28933, "raw_key_size": 310549, "raw_average_key_size": 26, "raw_value_size": 15063132, "raw_average_value_size": 1302, "num_data_blocks": 1270, "num_entries": 11569, "num_filter_entries": 11569, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.375173) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 15321711 bytes Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.377382) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.2 rd, 208.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 17.2 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(78.2) write-amplify(35.4) OK, records in: 12098, records dropped: 529 output_compression: NoCompression Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.377487) EVENT_LOG_v1 {"time_micros": 1759658239377441, "job": 16, "event": "compaction_finished", "compaction_time_micros": 73650, "compaction_time_cpu_micros": 45945, "output_level": 6, "num_output_files": 1, "total_output_size": 15321711, "num_input_records": 12098, "num_output_records": 11569, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658239378231, "job": 16, "event": "table_file_deletion", "file_number": 32} Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658239383087, "job": 
16, "event": "table_file_deletion", "file_number": 30} Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.300235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.383453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.383465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.383688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.383719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:19 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:19.383725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:19 localhost systemd[1]: var-lib-containers-storage-overlay-ad2c567d22f980896e5f30d3dc20fc887da25efbd5eaad115003d2c58bc54985-merged.mount: Deactivated successfully. Oct 5 05:57:19 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:57:19 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:57:19 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:19 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:19 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... 
Oct 5 05:57:19 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:19 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:57:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:57:19 localhost podman[317932]: 2025-10-05 09:57:19.932282647 +0000 UTC m=+0.101231014 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, 
config_id=ovn_controller) Oct 5 05:57:19 localhost podman[317932]: 2025-10-05 09:57:19.983025767 +0000 UTC m=+0.151974144 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller) Oct 5 05:57:19 localhost podman[317940]: Oct 5 05:57:19 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:57:20 localhost podman[317940]: 2025-10-05 09:57:20.002254446 +0000 UTC m=+0.146154467 container create 75e1bdfa962cb17d59546a24a6a06e280b0fdb69af8d66d7d5c89eeff396e42c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_noether, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.openshift.expose-services=) Oct 5 05:57:20 localhost systemd[1]: Started libpod-conmon-75e1bdfa962cb17d59546a24a6a06e280b0fdb69af8d66d7d5c89eeff396e42c.scope. Oct 5 05:57:20 localhost podman[317940]: 2025-10-05 09:57:19.95943388 +0000 UTC m=+0.103333951 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:20 localhost systemd[1]: Started libcrun container. 
Oct 5 05:57:20 localhost podman[317940]: 2025-10-05 09:57:20.087920999 +0000 UTC m=+0.231821030 container init 75e1bdfa962cb17d59546a24a6a06e280b0fdb69af8d66d7d5c89eeff396e42c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_noether, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=553, version=7, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:57:20 localhost podman[317940]: 2025-10-05 09:57:20.098980777 +0000 UTC m=+0.242880838 container start 75e1bdfa962cb17d59546a24a6a06e280b0fdb69af8d66d7d5c89eeff396e42c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_noether, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on 
RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_BRANCH=main, RELEASE=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7) Oct 5 05:57:20 localhost podman[317940]: 2025-10-05 09:57:20.099270245 +0000 UTC m=+0.243170316 container attach 75e1bdfa962cb17d59546a24a6a06e280b0fdb69af8d66d7d5c89eeff396e42c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_noether, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public) Oct 5 05:57:20 localhost beautiful_noether[317976]: 167 167 Oct 5 05:57:20 localhost systemd[1]: libpod-75e1bdfa962cb17d59546a24a6a06e280b0fdb69af8d66d7d5c89eeff396e42c.scope: 
Deactivated successfully. Oct 5 05:57:20 localhost podman[317940]: 2025-10-05 09:57:20.103684505 +0000 UTC m=+0.247584596 container died 75e1bdfa962cb17d59546a24a6a06e280b0fdb69af8d66d7d5c89eeff396e42c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_noether, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, RELEASE=main, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git) Oct 5 05:57:20 localhost podman[317981]: 2025-10-05 09:57:20.196307466 +0000 UTC m=+0.081745528 container remove 75e1bdfa962cb17d59546a24a6a06e280b0fdb69af8d66d7d5c89eeff396e42c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_noether, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, 
maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, release=553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main) Oct 5 05:57:20 localhost systemd[1]: libpod-conmon-75e1bdfa962cb17d59546a24a6a06e280b0fdb69af8d66d7d5c89eeff396e42c.scope: Deactivated successfully. Oct 5 05:57:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:20 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:20 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:20 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:57:20 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471151 (monmap changed)... 
Oct 5 05:57:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 5 05:57:20 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:20 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:20 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:57:20 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:57:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:57:20.458 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:57:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:57:20.460 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:57:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:57:20.461 163434 DEBUG oslo_concurrency.lockutils [-] Lock 
"_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:57:20 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:20 localhost systemd[1]: var-lib-containers-storage-overlay-c3735b98ac3e0113945794247371affd9d58c363c9c3d79d1362cdbfeeef6278-merged.mount: Deactivated successfully. Oct 5 05:57:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:57:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:57:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:21 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Oct 5 05:57:21 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... 
Oct 5 05:57:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Oct 5 05:57:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:57:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:21 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:21 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:57:21 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:57:21 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:21 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:21 localhost ceph-mon[308154]: Reconfiguring crash.np0005471151 (monmap changed)... 
Oct 5 05:57:21 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:21 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:57:21 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:21 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:21 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:57:21 localhost podman[248506]: time="2025-10-05T09:57:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:57:21 localhost podman[248506]: @ - - [05/Oct/2025:09:57:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:57:21 localhost podman[248506]: @ - - [05/Oct/2025:09:57:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18847 "" "Go-http-client/1.1" Oct 5 05:57:22 localhost openstack_network_exporter[250601]: ERROR 09:57:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:57:22 localhost openstack_network_exporter[250601]: ERROR 09:57:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:57:22 localhost openstack_network_exporter[250601]: ERROR 09:57:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:57:22 localhost openstack_network_exporter[250601]: ERROR 09:57:22 appctl.go:174: 
call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:57:22 localhost openstack_network_exporter[250601]: Oct 5 05:57:22 localhost openstack_network_exporter[250601]: ERROR 09:57:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:57:22 localhost openstack_network_exporter[250601]: Oct 5 05:57:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:57:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:57:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:22 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Oct 5 05:57:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... 
Oct 5 05:57:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Oct 5 05:57:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:57:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:22 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:22 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:57:22 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:57:22 localhost ceph-mon[308154]: Reconfiguring osd.2 (monmap changed)... 
Oct 5 05:57:22 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:57:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:22 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:57:22 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:22 localhost nova_compute[297021]: 2025-10-05 09:57:22.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:57:22 localhost nova_compute[297021]: 2025-10-05 09:57:22.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:22 localhost systemd[1]: tmp-crun.5FdjoV.mount: Deactivated successfully. 
Oct 5 05:57:22 localhost podman[317998]: 2025-10-05 09:57:22.68604277 +0000 UTC m=+0.092610322 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm) Oct 5 05:57:22 localhost podman[317998]: 2025-10-05 09:57:22.697405557 +0000 UTC m=+0.103973039 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 05:57:22 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:57:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:57:23 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:57:23 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:57:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:57:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 5 05:57:23 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:23 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:23 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon 
mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:57:23 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:57:23 localhost ceph-mon[308154]: Reconfiguring osd.5 (monmap changed)... Oct 5 05:57:23 localhost ceph-mon[308154]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:57:23 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:23 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:23 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:57:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:57:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:24 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471151.jecxod (monmap changed)... Oct 5 05:57:24 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:57:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 5 05:57:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 5 05:57:24 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr services"} : dispatch Oct 5 05:57:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:24 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:24 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:57:24 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:57:24 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... 
Oct 5 05:57:24 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:57:24 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:24 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:24 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:24 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:57:24 localhost podman[318018]: 2025-10-05 09:57:24.687667174 +0000 UTC m=+0.091863761 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, 
container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 05:57:24 localhost podman[318018]: 2025-10-05 09:57:24.731839137 +0000 UTC m=+0.136035684 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, 
build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6) Oct 5 05:57:24 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:57:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:57:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:57:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:24 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:57:24 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471152 (monmap changed)... 
Oct 5 05:57:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 5 05:57:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:24 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:24 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:57:24 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:57:25 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:57:25 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:57:25 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:25 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:25 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:57:25 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:57:25 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:25 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Oct 5 05:57:25 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... 
Oct 5 05:57:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Oct 5 05:57:25 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:57:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:25 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:57:25 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:57:26 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.44484 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005471152.localdomain:172.18.0.105", "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:57:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 5 05:57:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Oct 5 05:57:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:57:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command 
mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:26 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:26 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:57:26 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:57:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:26 localhost ceph-mon[308154]: Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:57:26 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:57:26 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:26 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:26 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:57:26 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:26 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:57:26 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 
05:57:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:57:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:26 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Oct 5 05:57:26 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Oct 5 05:57:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Oct 5 05:57:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:57:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:26 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:26 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:57:26 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:57:27 localhost ceph-mon[308154]: Reconfiguring osd.0 (monmap changed)... 
Oct 5 05:57:27 localhost ceph-mon[308154]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:57:27 localhost ceph-mon[308154]: Deploying daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:57:27 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:27 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:27 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:57:27 localhost nova_compute[297021]: 2025-10-05 09:57:27.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:27 localhost nova_compute[297021]: 2025-10-05 09:57:27.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:28 localhost ceph-mon[308154]: Reconfiguring osd.3 (monmap changed)... 
Oct 5 05:57:28 localhost ceph-mon[308154]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:57:28 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:57:28 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:57:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Oct 5 05:57:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Oct 5 05:57:28 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Oct 5 05:57:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader).monmap v14 adding/updating np0005471152 at [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to monitor cluster Oct 5 05:57:29 localhost ceph-mgr[301561]: mgr.server handle_open ignoring open from mon.np0005471152 172.18.0.108:0/1594433497; not ready for session (expect reconnect) Oct 5 05:57:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:29 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 
172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:29 localhost ceph-mgr[301561]: mgr finish mon failed to return metadata for mon.np0005471152: (2) No such file or directory Oct 5 05:57:29 localhost ceph-mon[308154]: mon.np0005471150@0(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471150"} v 0) Oct 5 05:57:29 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471150"} : dispatch Oct 5 05:57:29 localhost ceph-mon[308154]: mon.np0005471150@0(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:57:29 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:57:29 localhost ceph-mon[308154]: mon.np0005471150@0(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:29 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:29 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:57:29 localhost ceph-mon[308154]: paxos.0).electionLogic(56) init, last seen epoch 56 Oct 5 05:57:29 localhost ceph-mgr[301561]: mgr finish mon failed to return metadata for mon.np0005471152: (22) Invalid argument Oct 5 05:57:29 localhost ceph-mon[308154]: mon.np0005471150@0(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:57:29 localhost ceph-mon[308154]: mon.np0005471150@0(electing) e15 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:57:30 localhost ceph-mgr[301561]: mgr.server handle_open ignoring open from mon.np0005471152 172.18.0.108:0/1594433497; not ready for session (expect reconnect) Oct 5 05:57:30 localhost ceph-mon[308154]: mon.np0005471150@0(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:30 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:30 localhost ceph-mgr[301561]: mgr finish mon failed to return metadata for mon.np0005471152: (22) Invalid argument Oct 5 05:57:30 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:57:30 localhost podman[318039]: 2025-10-05 09:57:30.668229594 +0000 UTC m=+0.078168672 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:57:30 localhost podman[318039]: 2025-10-05 09:57:30.682345805 +0000 UTC m=+0.092284803 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:57:30 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:57:31 localhost ceph-mgr[301561]: mgr.server handle_open ignoring open from mon.np0005471152 172.18.0.108:0/1594433497; not ready for session (expect reconnect) Oct 5 05:57:31 localhost ceph-mon[308154]: mon.np0005471150@0(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:31 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:31 localhost ceph-mgr[301561]: mgr finish mon failed to return metadata for mon.np0005471152: (22) Invalid argument Oct 5 05:57:32 localhost ceph-mgr[301561]: mgr.server handle_open ignoring open from mon.np0005471152 172.18.0.108:0/1594433497; not ready for session (expect reconnect) Oct 5 05:57:32 localhost ceph-mon[308154]: mon.np0005471150@0(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:32 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:32 localhost ceph-mgr[301561]: mgr finish mon failed to return metadata for mon.np0005471152: (22) Invalid argument Oct 5 05:57:32 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:32 localhost nova_compute[297021]: 2025-10-05 09:57:32.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:32 localhost nova_compute[297021]: 2025-10-05 09:57:32.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:33 localhost ceph-mgr[301561]: mgr.server handle_open ignoring open from mon.np0005471152 172.18.0.108:0/1594433497; not ready for session (expect reconnect) Oct 5 05:57:33 localhost ceph-mon[308154]: mon.np0005471150@0(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:33 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:33 localhost ceph-mgr[301561]: mgr finish mon failed to return metadata for mon.np0005471152: (22) Invalid argument Oct 5 05:57:34 localhost ceph-mgr[301561]: mgr.server handle_open ignoring open from mon.np0005471152 172.18.0.108:0/1594433497; not ready for session (expect reconnect) Oct 5 05:57:34 localhost ceph-mon[308154]: mon.np0005471150@0(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:34 localhost ceph-mgr[301561]: mgr finish mon failed to return metadata for mon.np0005471152: (22) Invalid argument Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 is new leader, mons np0005471150,np0005471151 in quorum (ranks 0,1) Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : monmap epoch 15 Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:57:34 localhost 
ceph-mon[308154]: log_channel(cluster) log [DBG] : last_changed 2025-10-05T09:57:29.212307+0000 Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : created 2025-10-05T07:42:01.637504+0000 Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : election_strategy: 1 Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005471150 Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005471151 Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005471152 Oct 5 05:57:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005471152.pozuqw=up:active} 2 up:standby Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e32: np0005471150.zwqxye(active, since 75s), standbys: np0005471151.jecxod, np0005471152.kbhlus, np0005471148.fayrer Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum np0005471150,np0005471151 (MON_DOWN) Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum np0005471150,np0005471151 Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum np0005471150,np0005471151 Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(cluster) log [WRN] : mon.np0005471152 
(rank 2) addr [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum) Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:34 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:57:34 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:57:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:34 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:34 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:57:34 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] 
: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:57:34 localhost ceph-mon[308154]: mon.np0005471150 calling monitor election Oct 5 05:57:34 localhost ceph-mon[308154]: mon.np0005471151 calling monitor election Oct 5 05:57:34 localhost ceph-mon[308154]: mon.np0005471150 is new leader, mons np0005471150,np0005471151 in quorum (ranks 0,1) Oct 5 05:57:34 localhost ceph-mon[308154]: Health check failed: 1/3 mons down, quorum np0005471150,np0005471151 (MON_DOWN) Oct 5 05:57:34 localhost ceph-mon[308154]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005471150,np0005471151 Oct 5 05:57:34 localhost ceph-mon[308154]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005471150,np0005471151 Oct 5 05:57:34 localhost ceph-mon[308154]: mon.np0005471152 (rank 2) addr [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum) Oct 5 05:57:34 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:34 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:34 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:34 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:57:35 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:57:35 localhost ceph-mgr[301561]: mgr.server handle_open ignoring open from mon.np0005471152 172.18.0.108:0/1594433497; not ready for session (expect reconnect) Oct 5 05:57:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:35 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:35 localhost ceph-mgr[301561]: mgr finish mon failed to return metadata for mon.np0005471152: (22) Invalid argument Oct 5 05:57:35 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:35 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... Oct 5 05:57:35 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... 
Oct 5 05:57:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 5 05:57:35 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 5 05:57:35 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr services"} : dispatch Oct 5 05:57:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:35 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:35 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:57:35 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:57:35 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... 
Oct 5 05:57:35 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:57:35 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:35 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:35 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:57:35 localhost podman[318061]: 2025-10-05 09:57:35.67724811 +0000 UTC m=+0.077404820 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:57:35 localhost podman[318061]: 2025-10-05 09:57:35.714750303 +0000 UTC m=+0.114907033 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:57:35 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:57:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:36 localhost ceph-mgr[301561]: mgr.server handle_open ignoring open from mon.np0005471152 172.18.0.108:0/1594433497; not ready for session (expect reconnect) Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. 
Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.163723) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256163770, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 703, "num_deletes": 255, "total_data_size": 671196, "memory_usage": 685288, "flush_reason": "Manual Compaction"} Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256171648, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 589574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19871, "largest_seqno": 20573, "table_properties": {"data_size": 585714, "index_size": 1587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10062, "raw_average_key_size": 20, "raw_value_size": 577375, "raw_average_value_size": 1190, "num_data_blocks": 66, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658239, "oldest_key_time": 1759658239, "file_creation_time": 1759658256, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 7991 microseconds, and 3654 cpu microseconds. Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:57:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:36 localhost ceph-mgr[301561]: mgr finish mon failed to return metadata for mon.np0005471152: (22) Invalid argument Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.171706) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 589574 bytes OK Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.171735) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.175607) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.175630) EVENT_LOG_v1 {"time_micros": 1759658256175623, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": 
[1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.175652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 667220, prev total WAL file size 667920, number of live WAL files 2. Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.176388) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373634' seq:72057594037927935, type:22 .. '6C6F676D0034303135' seq:0, type:0; will stop at (end) Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(575KB)], [33(14MB)] Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256176463, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 15911285, "oldest_snapshot_seqno": -1} Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 11518 keys, 15781595 bytes, temperature: kUnknown Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256249072, "cf_name": "default", "job": 
18, "event": "table_file_creation", "file_number": 36, "file_size": 15781595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15717116, "index_size": 34618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28805, "raw_key_size": 310944, "raw_average_key_size": 26, "raw_value_size": 15521942, "raw_average_value_size": 1347, "num_data_blocks": 1306, "num_entries": 11518, "num_filter_entries": 11518, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658256, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.249441) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 15781595 bytes Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.251194) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.8 rd, 217.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.6 +0.0 blob) out(15.1 +0.0 blob), read-write-amplify(53.8) write-amplify(26.8) OK, records in: 12054, records dropped: 536 output_compression: NoCompression Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.251222) EVENT_LOG_v1 {"time_micros": 1759658256251210, "job": 18, "event": "compaction_finished", "compaction_time_micros": 72723, "compaction_time_cpu_micros": 44885, "output_level": 6, "num_output_files": 1, "total_output_size": 15781595, "num_input_records": 12054, "num_output_records": 11518, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256251464, "job": 18, "event": "table_file_deletion", "file_number": 35} Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256253821, "job": 
18, "event": "table_file_deletion", "file_number": 33} Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.176296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.253901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.253911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.253916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.253921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:57:36.253925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 calling monitor election Oct 5 05:57:36 localhost ceph-mon[308154]: paxos.0).electionLogic(58) init, last seen epoch 58 Oct 5 05:57:36 localhost ceph-mon[308154]: mon.np0005471150@0(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : mon.np0005471150 is new leader, mons np0005471150,np0005471151,np0005471152 in quorum (ranks 0,1,2) Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : monmap epoch 15 Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : 
last_changed 2025-10-05T09:57:29.212307+0000 Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : created 2025-10-05T07:42:01.637504+0000 Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : election_strategy: 1 Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005471150 Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005471151 Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005471152 Oct 5 05:57:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005471152.pozuqw=up:active} 2 up:standby Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e32: np0005471150.zwqxye(active, since 77s), standbys: np0005471151.jecxod, np0005471152.kbhlus, np0005471148.fayrer Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005471150,np0005471151) Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : Cluster is now healthy Oct 5 05:57:36 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : overall HEALTH_OK Oct 5 05:57:36 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:57:37 localhost ceph-mgr[301561]: mgr.server handle_open 
ignoring open from mon.np0005471152 172.18.0.108:0/1594433497; not ready for session (expect reconnect) Oct 5 05:57:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:57:37 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:57:37 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election Oct 5 05:57:37 localhost ceph-mon[308154]: mon.np0005471152 calling monitor election Oct 5 05:57:37 localhost ceph-mon[308154]: mon.np0005471150 calling monitor election Oct 5 05:57:37 localhost ceph-mon[308154]: mon.np0005471151 calling monitor election Oct 5 05:57:37 localhost ceph-mon[308154]: mon.np0005471150 is new leader, mons np0005471150,np0005471151,np0005471152 in quorum (ranks 0,1,2) Oct 5 05:57:37 localhost ceph-mon[308154]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005471150,np0005471151) Oct 5 05:57:37 localhost ceph-mon[308154]: Cluster is now healthy Oct 5 05:57:37 localhost ceph-mon[308154]: overall HEALTH_OK Oct 5 05:57:37 localhost nova_compute[297021]: 2025-10-05 09:57:37.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:37 localhost nova_compute[297021]: 2025-10-05 09:57:37.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:57:37 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:37 localhost 
ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:57:37 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:38 localhost ceph-mgr[301561]: mgr.server handle_report got status from non-daemon mon.np0005471152
Oct 5 05:57:38 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:38.166+0000 7fbbd00a6640 -1 mgr.server handle_report got status from non-daemon mon.np0005471152
Oct 5 05:57:38 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:57:38 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:38 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:38 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 05:57:38 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:57:38 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:38 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:38 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:38 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:38 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:38 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:38 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:38 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:38 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:57:38 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:38 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:38 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:39 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:39 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:39 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:39 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:39 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:39 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:39 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:39 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:39 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:40 localhost ceph-mgr[301561]: [progress INFO root] update: starting ev e41fac8c-7238-4e34-8560-be4c2e64e2b3 (Updating node-proxy deployment (+3 -> 3))
Oct 5 05:57:40 localhost ceph-mgr[301561]: [progress INFO root] complete: finished ev e41fac8c-7238-4e34-8560-be4c2e64e2b3 (Updating node-proxy deployment (+3 -> 3))
Oct 5 05:57:40 localhost ceph-mgr[301561]: [progress INFO root] Completed event e41fac8c-7238-4e34-8560-be4c2e64e2b3 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 05:57:40 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:57:40 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:40 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:57:40 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:57:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 05:57:40 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:57:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 05:57:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 5 05:57:40 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1455310807' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Oct 5 05:57:40 localhost podman[318505]: 2025-10-05 09:57:40.605663349 +0000 UTC m=+0.091858611 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible)
Oct 5 05:57:40 localhost podman[318505]: 2025-10-05 09:57:40.620630683 +0000 UTC m=+0.106825945 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 5 05:57:40 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 05:57:40 localhost podman[318507]: 2025-10-05 09:57:40.714308322 +0000 UTC m=+0.196861556 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:57:40 localhost podman[318507]: 2025-10-05 09:57:40.725998449 +0000 UTC m=+0.208551673 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 5 05:57:40 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 05:57:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:41 localhost ceph-mon[308154]: Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:57:41 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:41 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:57:41 localhost podman[318576]:
Oct 5 05:57:41 localhost podman[318576]: 2025-10-05 09:57:41.096458851 +0000 UTC m=+0.081651935 container create 5cff4f4c9d54ee64442714de5dc81a24cd9f6f3e5fc39538ad30118edbd2a281 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_payne, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, build-date=2025-09-24T08:57:55, vcs-type=git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:57:41 localhost systemd[1]: Started libpod-conmon-5cff4f4c9d54ee64442714de5dc81a24cd9f6f3e5fc39538ad30118edbd2a281.scope.
Oct 5 05:57:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:57:41 localhost podman[318576]: 2025-10-05 09:57:41.064942759 +0000 UTC m=+0.050135883 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:41 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:41 localhost podman[318576]: 2025-10-05 09:57:41.190655334 +0000 UTC m=+0.175848418 container init 5cff4f4c9d54ee64442714de5dc81a24cd9f6f3e5fc39538ad30118edbd2a281 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_payne, maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-09-24T08:57:55)
Oct 5 05:57:41 localhost podman[318576]: 2025-10-05 09:57:41.203189183 +0000 UTC m=+0.188382267 container start 5cff4f4c9d54ee64442714de5dc81a24cd9f6f3e5fc39538ad30118edbd2a281 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_payne, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, version=7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 5 05:57:41 localhost podman[318576]: 2025-10-05 09:57:41.203537052 +0000 UTC m=+0.188730166 container attach 5cff4f4c9d54ee64442714de5dc81a24cd9f6f3e5fc39538ad30118edbd2a281 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_payne, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, release=553, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 5 05:57:41 localhost strange_payne[318592]: 167 167
Oct 5 05:57:41 localhost systemd[1]: libpod-5cff4f4c9d54ee64442714de5dc81a24cd9f6f3e5fc39538ad30118edbd2a281.scope: Deactivated successfully.
Oct 5 05:57:41 localhost podman[318576]: 2025-10-05 09:57:41.209478882 +0000 UTC m=+0.194672026 container died 5cff4f4c9d54ee64442714de5dc81a24cd9f6f3e5fc39538ad30118edbd2a281 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_payne, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public)
Oct 5 05:57:41 localhost podman[318597]: 2025-10-05 09:57:41.327608142 +0000 UTC m=+0.103264330 container remove 5cff4f4c9d54ee64442714de5dc81a24cd9f6f3e5fc39538ad30118edbd2a281 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_payne, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, version=7)
Oct 5 05:57:41 localhost systemd[1]: libpod-conmon-5cff4f4c9d54ee64442714de5dc81a24cd9f6f3e5fc39538ad30118edbd2a281.scope: Deactivated successfully.
Oct 5 05:57:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:41 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct 5 05:57:41 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct 5 05:57:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct 5 05:57:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Oct 5 05:57:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:41 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:41 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:57:41 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:57:41 localhost systemd[1]: var-lib-containers-storage-overlay-43264fc2927f32af08621a0892028adc4a4f8b43b72b5b4d96987258b40b4725-merged.mount: Deactivated successfully.
Oct 5 05:57:42 localhost ceph-mgr[301561]: log_channel(audit) log [DBG] : from='client.44496 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Oct 5 05:57:42 localhost ceph-mgr[301561]: [cephadm INFO root] Reconfig service osd.default_drive_group
Oct 5 05:57:42 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:42 localhost podman[318665]:
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:42 localhost podman[318665]: 2025-10-05 09:57:42.085855605 +0000 UTC m=+0.083213248 container create 712364a0e181188d73d29e8793577b043ae9bc6ab103051b95f7095608198ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_torvalds, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, version=7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:42 localhost systemd[1]: Started libpod-conmon-712364a0e181188d73d29e8793577b043ae9bc6ab103051b95f7095608198ad9.scope.
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:42 localhost podman[318665]: 2025-10-05 09:57:42.050696275 +0000 UTC m=+0.048053958 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost podman[318665]: 2025-10-05 09:57:42.161258301 +0000 UTC m=+0.158615954 container init 712364a0e181188d73d29e8793577b043ae9bc6ab103051b95f7095608198ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_torvalds, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost podman[318665]: 2025-10-05 09:57:42.174445937 +0000 UTC m=+0.171803580 container start 712364a0e181188d73d29e8793577b043ae9bc6ab103051b95f7095608198ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_torvalds, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, name=rhceph, architecture=x86_64, io.buildah.version=1.33.12, ceph=True, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 05:57:42 localhost podman[318665]: 2025-10-05 09:57:42.175202378 +0000 UTC m=+0.172560551 container attach 712364a0e181188d73d29e8793577b043ae9bc6ab103051b95f7095608198ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_torvalds, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, RELEASE=main, distribution-scope=public, release=553, ceph=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:57:42 localhost fervent_torvalds[318680]: 167 167
Oct 5 05:57:42 localhost systemd[1]: libpod-712364a0e181188d73d29e8793577b043ae9bc6ab103051b95f7095608198ad9.scope: Deactivated successfully.
Oct 5 05:57:42 localhost podman[318665]: 2025-10-05 09:57:42.181511168 +0000 UTC m=+0.178868871 container died 712364a0e181188d73d29e8793577b043ae9bc6ab103051b95f7095608198ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_torvalds, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, release=553, io.openshift.expose-services=)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost podman[318685]: 2025-10-05 09:57:42.279849853 +0000 UTC m=+0.084962635 container remove 712364a0e181188d73d29e8793577b043ae9bc6ab103051b95f7095608198ad9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_torvalds, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, release=553, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, RELEASE=main, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 05:57:42 localhost systemd[1]: libpod-conmon-712364a0e181188d73d29e8793577b043ae9bc6ab103051b95f7095608198ad9.scope: Deactivated successfully.
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: Reconfiguring osd.1 (monmap changed)...
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Oct 5 05:57:42 localhost ceph-mon[308154]: Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:42 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:42 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Oct 5 05:57:42 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 5 05:57:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:42 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:42 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:57:42 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:57:42 localhost nova_compute[297021]: 2025-10-05 09:57:42.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:57:42 localhost systemd[1]: var-lib-containers-storage-overlay-5a03b9b02f69ae46aac6c74a9e21c6b2aa4daed53ed9d61c9bb15f40e4cd3b6c-merged.mount: Deactivated successfully.
Oct 5 05:57:42 localhost nova_compute[297021]: 2025-10-05 09:57:42.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:57:43 localhost podman[318761]:
Oct 5 05:57:43 localhost podman[318761]: 2025-10-05 09:57:43.18530503 +0000 UTC m=+0.078998693 container create 30cbb4081f7ce988722dcd9d0a3715b91fe9e19fbc919f3486cdeb808f6eb44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_knuth, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, release=553, version=7, ceph=True, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7)
Oct 5 05:57:43 localhost systemd[1]: Started libpod-conmon-30cbb4081f7ce988722dcd9d0a3715b91fe9e19fbc919f3486cdeb808f6eb44f.scope.
Oct 5 05:57:43 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:43 localhost podman[318761]: 2025-10-05 09:57:43.152497125 +0000 UTC m=+0.046190818 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:43 localhost podman[318761]: 2025-10-05 09:57:43.252107744 +0000 UTC m=+0.145801407 container init 30cbb4081f7ce988722dcd9d0a3715b91fe9e19fbc919f3486cdeb808f6eb44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_knuth, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, distribution-scope=public, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, vcs-type=git)
Oct 5 05:57:43 localhost podman[318761]: 2025-10-05 09:57:43.261596611 +0000 UTC m=+0.155290274 container start 30cbb4081f7ce988722dcd9d0a3715b91fe9e19fbc919f3486cdeb808f6eb44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_knuth, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12)
Oct 5 05:57:43 localhost podman[318761]: 2025-10-05 09:57:43.261844678 +0000 UTC m=+0.155538351 container attach 30cbb4081f7ce988722dcd9d0a3715b91fe9e19fbc919f3486cdeb808f6eb44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_knuth, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., version=7, release=553, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64)
Oct 5 05:57:43 localhost happy_knuth[318776]: 167 167
Oct 5 05:57:43 localhost podman[318761]: 2025-10-05 09:57:43.265119146 +0000 UTC m=+0.158812819 container died 30cbb4081f7ce988722dcd9d0a3715b91fe9e19fbc919f3486cdeb808f6eb44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_knuth, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 05:57:43 localhost systemd[1]: libpod-30cbb4081f7ce988722dcd9d0a3715b91fe9e19fbc919f3486cdeb808f6eb44f.scope: Deactivated successfully.
Oct 5 05:57:43 localhost podman[318781]: 2025-10-05 09:57:43.363934534 +0000 UTC m=+0.088099910 container remove 30cbb4081f7ce988722dcd9d0a3715b91fe9e19fbc919f3486cdeb808f6eb44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_knuth, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, release=553, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7)
Oct 5 05:57:43 localhost systemd[1]: libpod-conmon-30cbb4081f7ce988722dcd9d0a3715b91fe9e19fbc919f3486cdeb808f6eb44f.scope: Deactivated successfully.
Oct 5 05:57:43 localhost ceph-mon[308154]: Reconfig service osd.default_drive_group
Oct 5 05:57:43 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:43 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:43 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:43 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:43 localhost ceph-mon[308154]: Reconfiguring osd.4 (monmap changed)...
Oct 5 05:57:43 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 5 05:57:43 localhost ceph-mon[308154]: Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:57:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:43 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:43 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:43 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:43 localhost systemd[1]: var-lib-containers-storage-overlay-7df10d7b15aa75f0b109efe9384989ed2c6134566ef1716d5bc92f4ba7ad7c09-merged.mount: Deactivated successfully.
Oct 5 05:57:43 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:43 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:57:43 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:57:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 5 05:57:43 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:43 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:43 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:57:43 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:57:43 localhost ceph-mgr[301561]: [progress INFO root] Writing back 50 completed events
Oct 5 05:57:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 05:57:43 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost podman[318858]:
Oct 5 05:57:44 localhost podman[318858]: 2025-10-05 09:57:44.241048376 +0000 UTC m=+0.084915673 container create 14eb1338c46309c6071b0e88ee9c1bc6b007cf0a225977a2585dab99e03a10a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_goodall, distribution-scope=public, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, version=7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main)
Oct 5 05:57:44 localhost systemd[1]: Started libpod-conmon-14eb1338c46309c6071b0e88ee9c1bc6b007cf0a225977a2585dab99e03a10a5.scope.
Oct 5 05:57:44 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:44 localhost podman[318858]: 2025-10-05 09:57:44.207434849 +0000 UTC m=+0.051302146 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:44 localhost podman[318858]: 2025-10-05 09:57:44.312162297 +0000 UTC m=+0.156029594 container init 14eb1338c46309c6071b0e88ee9c1bc6b007cf0a225977a2585dab99e03a10a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_goodall, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, version=7, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True)
Oct 5 05:57:44 localhost podman[318858]: 2025-10-05 09:57:44.328603811 +0000 UTC m=+0.172471108 container start 14eb1338c46309c6071b0e88ee9c1bc6b007cf0a225977a2585dab99e03a10a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_goodall, vcs-type=git, RELEASE=main, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.)
Oct 5 05:57:44 localhost podman[318858]: 2025-10-05 09:57:44.329112845 +0000 UTC m=+0.172980152 container attach 14eb1338c46309c6071b0e88ee9c1bc6b007cf0a225977a2585dab99e03a10a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_goodall, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.buildah.version=1.33.12, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 05:57:44 localhost amazing_goodall[318873]: 167 167
Oct 5 05:57:44 localhost systemd[1]: libpod-14eb1338c46309c6071b0e88ee9c1bc6b007cf0a225977a2585dab99e03a10a5.scope: Deactivated successfully.
Oct 5 05:57:44 localhost podman[318858]: 2025-10-05 09:57:44.332223378 +0000 UTC m=+0.176090675 container died 14eb1338c46309c6071b0e88ee9c1bc6b007cf0a225977a2585dab99e03a10a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_goodall, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_CLEAN=True, version=7, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:57:44 localhost podman[318879]: 2025-10-05 09:57:44.4349062 +0000 UTC m=+0.089847797 container remove 14eb1338c46309c6071b0e88ee9c1bc6b007cf0a225977a2585dab99e03a10a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_goodall, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, vcs-type=git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, RELEASE=main)
Oct 5 05:57:44 localhost systemd[1]: libpod-conmon-14eb1338c46309c6071b0e88ee9c1bc6b007cf0a225977a2585dab99e03a10a5.scope: Deactivated successfully.
Oct 5 05:57:44 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:57:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:44 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:44 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:57:44 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:57:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 5 05:57:44 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 5 05:57:44 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr services"} : dispatch
Oct 5 05:57:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:44 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:44 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:57:44 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:57:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:57:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:44 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:57:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:44 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:44 localhost systemd[1]: var-lib-containers-storage-overlay-2692b8eb22484d958a072bde1ca4b62ae708af17dbbaefd2fcbf5ac3052709de-merged.mount: Deactivated successfully.
Oct 5 05:57:45 localhost podman[318950]:
Oct 5 05:57:45 localhost podman[318950]: 2025-10-05 09:57:45.183781101 +0000 UTC m=+0.080817133 container create a472e18bc6099462a93f9ba92bcf96b77e4bfc2d1f4ca5842f23f304cdf0e0a2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mayer, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=553, architecture=x86_64)
Oct 5 05:57:45 localhost systemd[1]: Started libpod-conmon-a472e18bc6099462a93f9ba92bcf96b77e4bfc2d1f4ca5842f23f304cdf0e0a2.scope.
Oct 5 05:57:45 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:45 localhost podman[318950]: 2025-10-05 09:57:45.1522391 +0000 UTC m=+0.049275142 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:45 localhost podman[318950]: 2025-10-05 09:57:45.255480536 +0000 UTC m=+0.152516578 container init a472e18bc6099462a93f9ba92bcf96b77e4bfc2d1f4ca5842f23f304cdf0e0a2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mayer, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, ceph=True, io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55)
Oct 5 05:57:45 localhost podman[318950]: 2025-10-05 09:57:45.264672655 +0000 UTC m=+0.161708687 container start a472e18bc6099462a93f9ba92bcf96b77e4bfc2d1f4ca5842f23f304cdf0e0a2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mayer, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, release=553, version=7, distribution-scope=public, architecture=x86_64, ceph=True, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main)
Oct 5 05:57:45 localhost focused_mayer[318966]: 167 167
Oct 5 05:57:45 localhost podman[318950]: 2025-10-05 09:57:45.266197356 +0000 UTC m=+0.163233428 container attach a472e18bc6099462a93f9ba92bcf96b77e4bfc2d1f4ca5842f23f304cdf0e0a2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mayer, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, architecture=x86_64, RELEASE=main, version=7)
Oct 5 05:57:45 localhost systemd[1]: libpod-a472e18bc6099462a93f9ba92bcf96b77e4bfc2d1f4ca5842f23f304cdf0e0a2.scope: Deactivated successfully.
Oct 5 05:57:45 localhost podman[318950]: 2025-10-05 09:57:45.269790443 +0000 UTC m=+0.166826505 container died a472e18bc6099462a93f9ba92bcf96b77e4bfc2d1f4ca5842f23f304cdf0e0a2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mayer, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph)
Oct 5 05:57:45 localhost podman[318971]: 2025-10-05 09:57:45.372817575 +0000 UTC m=+0.091328697 container remove a472e18bc6099462a93f9ba92bcf96b77e4bfc2d1f4ca5842f23f304cdf0e0a2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mayer, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, version=7, maintainer=Guillaume Abrioux , release=553, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 05:57:45 localhost systemd[1]: libpod-conmon-a472e18bc6099462a93f9ba92bcf96b77e4bfc2d1f4ca5842f23f304cdf0e0a2.scope: Deactivated successfully.
Oct 5 05:57:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:45 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:57:45 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:57:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 5 05:57:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:45 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:45 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:57:45 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:57:45 localhost systemd[1]: var-lib-containers-storage-overlay-694752a5100a29b7a1c84712eea991683228e1215e7f47fb8c7dfcfb573fc006-merged.mount: Deactivated successfully.
Oct 5 05:57:45 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:57:45 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:57:45 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:45 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:45 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:57:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:46 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:46 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:46 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Oct 5 05:57:46 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Oct 5 05:57:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct 5 05:57:46 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:57:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:46 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:46 localhost ceph-mgr[301561]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:57:46 localhost ceph-mgr[301561]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:57:46 localhost ceph-mgr[301561]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:57:46 localhost ceph-mon[308154]: Reconfiguring osd.2 (monmap changed)...
Oct 5 05:57:46 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:57:46 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:46 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:46 localhost ceph-mon[308154]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:57:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mgr fail"} v 0)
Oct 5 05:57:46 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/821573250' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Oct 5 05:57:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 5 05:57:46 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : Activating manager daemon np0005471151.jecxod
Oct 5 05:57:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 e86: 6 total, 6 up, 6 in
Oct 5 05:57:46 localhost ceph-mgr[301561]: mgr handle_mgr_map I was active but no longer am
Oct 5 05:57:46 localhost ceph-mgr[301561]: mgr respawn e: '/usr/bin/ceph-mgr'
Oct 5 05:57:46 localhost ceph-mgr[301561]: mgr respawn 0: '/usr/bin/ceph-mgr'
Oct 5 05:57:46 localhost ceph-mgr[301561]: mgr respawn 1: '-n'
Oct 5 05:57:46 localhost ceph-mgr[301561]: mgr respawn 2: 'mgr.np0005471150.zwqxye'
Oct 5 05:57:46 localhost ceph-mgr[301561]: mgr respawn 3: '-f'
Oct 5 05:57:46 localhost ceph-mgr[301561]: mgr respawn 4: '--setuser'
Oct 5 05:57:46 localhost ceph-mgr[301561]: mgr respawn 5: 'ceph'
Oct 5 05:57:46 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:46.990+0000 7fbc2c0f5640 -1 mgr handle_mgr_map I was active but no longer am
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e86: 6 total, 6 up, 6 in
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/821573250' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e33: np0005471151.jecxod(active, starting, since 0.0383044s), standbys: np0005471152.kbhlus, np0005471148.fayrer
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : Manager daemon np0005471151.jecxod is now available
Oct 5 05:57:47 localhost systemd[1]: session-73.scope: Deactivated successfully.
Oct 5 05:57:47 localhost systemd[1]: session-73.scope: Consumed 29.630s CPU time.
Oct 5 05:57:47 localhost systemd-logind[760]: Session 73 logged out. Waiting for processes to exit.
Oct 5 05:57:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} v 0)
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch
Oct 5 05:57:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 05:57:47 localhost systemd-logind[760]: Removed session 73.
Oct 5 05:57:47 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: ignoring --setuser ceph since I am not root
Oct 5 05:57:47 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: ignoring --setgroup ceph since I am not root
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"}]': finished
Oct 5 05:57:47 localhost ceph-mgr[301561]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Oct 5 05:57:47 localhost ceph-mgr[301561]: pidfile_write: ignore empty --pid-file
Oct 5 05:57:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} v 0)
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"}]': finished
Oct 5 05:57:47 localhost ceph-mgr[301561]: mgr[py] Loading python module 'alerts'
Oct 5 05:57:47 localhost podman[318986]: 2025-10-05 09:57:47.176989438 +0000 UTC m=+0.090075093 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:57:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} v 0)
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} : dispatch
Oct 5 05:57:47 localhost podman[318986]: 2025-10-05 09:57:47.187856312 +0000 UTC m=+0.100941897 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 5 05:57:47 localhost ceph-mgr[301561]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 5 05:57:47 localhost ceph-mgr[301561]: mgr[py] Loading python module 'balancer'
Oct 5 05:57:47 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:47.197+0000 7f986beb9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 5 05:57:47 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 05:57:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} v 0)
Oct 5 05:57:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} : dispatch
Oct 5 05:57:47 localhost ceph-mgr[301561]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 5 05:57:47 localhost ceph-mgr[301561]: mgr[py] Loading python module 'cephadm'
Oct 5 05:57:47 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:47.266+0000 7f986beb9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 5 05:57:47 localhost sshd[319028]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:57:47 localhost systemd-logind[760]: New session 74 of user ceph-admin.
Oct 5 05:57:47 localhost systemd[1]: Started Session 74 of User ceph-admin.
Oct 5 05:57:47 localhost nova_compute[297021]: 2025-10-05 09:57:47.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:57:47 localhost nova_compute[297021]: 2025-10-05 09:57:47.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:57:47 localhost ceph-mon[308154]: Reconfiguring osd.2 (monmap changed)...
Oct 5 05:57:47 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:57:47 localhost ceph-mon[308154]: from='client.? 172.18.0.200:0/821573250' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Oct 5 05:57:47 localhost ceph-mon[308154]: Activating manager daemon np0005471151.jecxod
Oct 5 05:57:47 localhost ceph-mon[308154]: from='client.? 172.18.0.200:0/821573250' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Oct 5 05:57:47 localhost ceph-mon[308154]: Manager daemon np0005471151.jecxod is now available
Oct 5 05:57:47 localhost ceph-mon[308154]: removing stray HostCache host record np0005471148.localdomain.devices.0
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"}]': finished
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"}]': finished
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} : dispatch
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} : dispatch
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} : dispatch
Oct 5 05:57:47 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} : dispatch
Oct 5 05:57:47 localhost ceph-mgr[301561]: mgr[py] Loading python module 'crash'
Oct 5 05:57:47 localhost ceph-mgr[301561]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 5 05:57:47 localhost ceph-mgr[301561]: mgr[py] Loading python module 'dashboard'
Oct 5 05:57:47 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:47.884+0000 7f986beb9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 5 05:57:48 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e34: np0005471151.jecxod(active, since 1.05183s), standbys: np0005471152.kbhlus, np0005471148.fayrer
Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Loading python module 'devicehealth'
Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Loading python module 'diskprediction_local'
Oct 5 05:57:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:48.454+0000 7f986beb9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 5 05:57:48 localhost systemd[1]: tmp-crun.jBDyHd.mount: Deactivated successfully.
Oct 5 05:57:48 localhost podman[319143]: 2025-10-05 09:57:48.54626339 +0000 UTC m=+0.111421690 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, version=7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=) Oct 5 05:57:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
Oct 5 05:57:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Oct 5 05:57:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: from numpy import show_config as show_numpy_config Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Loading python module 'influx' Oct 5 05:57:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:48.595+0000 7f986beb9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 5 05:57:48 localhost podman[319143]: 2025-10-05 09:57:48.653963698 +0000 UTC m=+0.219121988 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, distribution-scope=public, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , architecture=x86_64, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Module influx has missing NOTIFY_TYPES member Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Loading python module 'insights' Oct 5 05:57:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:48.657+0000 7f986beb9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Loading python module 'iostat' Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 5 05:57:48 localhost ceph-mgr[301561]: mgr[py] Loading python module 'k8sevents' Oct 5 05:57:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:48.777+0000 7f986beb9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'localpool' Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'mds_autoscaler' Oct 5 05:57:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:57:49 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:57:49 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:57:49 localhost ceph-mon[308154]: 
log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:57:49 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:49 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'mirroring' Oct 5 05:57:49 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'nfs' Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'orchestrator' Oct 5 05:57:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:49.551+0000 7f986beb9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:49.698+0000 7f986beb9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'osd_perf_query' Oct 5 05:57:49 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:49.764+0000 7f986beb9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'osd_support' Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'pg_autoscaler' Oct 5 05:57:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:49.829+0000 7f986beb9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'progress' Oct 5 05:57:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:49.898+0000 7f986beb9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Module progress has missing NOTIFY_TYPES member Oct 5 05:57:49 localhost ceph-mgr[301561]: mgr[py] Loading python module 'prometheus' Oct 5 05:57:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:49.963+0000 7f986beb9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Oct 5 05:57:50 localhost ceph-mon[308154]: [05/Oct/2025:09:57:48] ENGINE Bus STARTING Oct 5 05:57:50 localhost ceph-mon[308154]: [05/Oct/2025:09:57:48] ENGINE Serving on https://172.18.0.107:7150 Oct 5 05:57:50 localhost ceph-mon[308154]: [05/Oct/2025:09:57:48] ENGINE Client ('172.18.0.107', 58322) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:57:50 
localhost ceph-mon[308154]: [05/Oct/2025:09:57:48] ENGINE Serving on http://172.18.0.107:8765 Oct 5 05:57:50 localhost ceph-mon[308154]: [05/Oct/2025:09:57:48] ENGINE Bus STARTED Oct 5 05:57:50 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mgr[301561]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 5 05:57:50 localhost ceph-mgr[301561]: mgr[py] Loading python module 'rbd_support' Oct 5 05:57:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:50.261+0000 7f986beb9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 5 05:57:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:57:50 localhost ceph-mgr[301561]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 5 05:57:50 localhost ceph-mgr[301561]: mgr[py] Loading python module 'restful' Oct 5 05:57:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:50.344+0000 7f986beb9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e35: np0005471151.jecxod(active, since 3s), standbys: np0005471152.kbhlus, np0005471148.fayrer Oct 5 05:57:50 localhost systemd[1]: tmp-crun.yG5PCI.mount: Deactivated successfully. 
Oct 5 05:57:50 localhost podman[319351]: 2025-10-05 09:57:50.414104713 +0000 UTC m=+0.093401214 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 05:57:50 localhost podman[319351]: 2025-10-05 09:57:50.453739353 +0000 UTC m=+0.133035894 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:57:50 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:57:50 localhost ceph-mgr[301561]: mgr[py] Loading python module 'rgw' Oct 5 05:57:50 localhost ceph-mgr[301561]: mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 5 05:57:50 localhost ceph-mgr[301561]: mgr[py] Loading python module 'rook' Oct 5 05:57:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:50.682+0000 7f986beb9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 5 
05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Oct 5 05:57:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:57:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Module rook has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Loading python module 'selftest' Oct 5 05:57:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:51.108+0000 7f986beb9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Module selftest has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Loading python module 'snap_schedule' Oct 5 05:57:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:51.167+0000 7f986beb9140 -1 mgr[py] 
Module selftest has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Loading python module 'stats' Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Loading python module 'status' Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' 
entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Module status has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Loading python module 'telegraf' Oct 5 05:57:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:51.358+0000 7f986beb9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Loading python module 'telemetry' Oct 5 05:57:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:51.416+0000 7f986beb9140 -1 mgr[py] Module telegraf has missing 
NOTIFY_TYPES member Oct 5 05:57:51 localhost podman[248506]: time="2025-10-05T09:57:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:57:51 localhost podman[248506]: @ - - [05/Oct/2025:09:57:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:57:51 localhost podman[248506]: @ - - [05/Oct/2025:09:57:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18858 "" "Go-http-client/1.1" Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Loading python module 'test_orchestrator' Oct 5 05:57:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:51.561+0000 7f986beb9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Loading python module 'volumes' Oct 5 05:57:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:51.704+0000 7f986beb9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Module volumes has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Loading python module 'zabbix' Oct 5 05:57:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:51.888+0000 7f986beb9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mgr[301561]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471150-zwqxye[301557]: 2025-10-05T09:57:51.946+0000 
7f986beb9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Oct 5 05:57:51 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : Standby manager daemon np0005471150.zwqxye started Oct 5 05:57:51 localhost ceph-mgr[301561]: ms_deliver_dispatch: unhandled message 0x561edab6f600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Oct 5 05:57:51 localhost ceph-mgr[301561]: client.0 ms_handle_reset on v2:172.18.0.107:6810/1385085224 Oct 5 05:57:52 localhost openstack_network_exporter[250601]: ERROR 09:57:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:57:52 localhost openstack_network_exporter[250601]: ERROR 09:57:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:57:52 localhost openstack_network_exporter[250601]: ERROR 09:57:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:57:52 localhost openstack_network_exporter[250601]: ERROR 09:57:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:57:52 localhost openstack_network_exporter[250601]: Oct 5 05:57:52 localhost openstack_network_exporter[250601]: ERROR 09:57:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:57:52 localhost openstack_network_exporter[250601]: Oct 5 05:57:52 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:57:52 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:57:52 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:57:52 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 
5 05:57:52 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 05:57:52 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:57:52 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:52 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:52 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:52 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:52 localhost nova_compute[297021]: 2025-10-05 09:57:52.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:52 localhost nova_compute[297021]: 2025-10-05 09:57:52.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:57:52 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e36: np0005471151.jecxod(active, since 5s), standbys: np0005471152.kbhlus, np0005471148.fayrer, np0005471150.zwqxye
Oct 5 05:57:52 localhost podman[319856]: 2025-10-05 09:57:52.956450767 +0000 UTC m=+0.096465845 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute)
Oct 5 05:57:52 localhost podman[319856]: 2025-10-05 09:57:52.967884996 +0000 UTC m=+0.107900054 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:57:52 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully.
Oct 5 05:57:53 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:53 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:53 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:57:53 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:57:53 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:57:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:53 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:53 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:57:53 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:57:53 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:53 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:53 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:57:53 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:53 localhost ceph-mon[308154]: log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Oct 5 05:57:53 localhost ceph-mon[308154]: log_channel(cluster) log [WRN] : Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Oct 5 05:57:54 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:57:54 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:57:54 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:57:54 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[308154]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Oct 5 05:57:54 localhost ceph-mon[308154]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Oct 5 05:57:54 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:57:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:54 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:54 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:55 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:55 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[308154]: Reconfiguring osd.2 (monmap changed)...
Oct 5 05:57:55 localhost ceph-mon[308154]: Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:57:55 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Oct 5 05:57:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.
Oct 5 05:57:55 localhost podman[320088]: 2025-10-05 09:57:55.681025443 +0000 UTC m=+0.084794691 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible)
Oct 5 05:57:55 localhost podman[320088]: 2025-10-05 09:57:55.696367556 +0000 UTC m=+0.100136804 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Oct 5 05:57:55 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully.
Oct 5 05:57:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:55 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:56 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:56 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:56 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 5 05:57:56 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:57:56 localhost ceph-mon[308154]: Reconfiguring osd.5 (monmap changed)...
Oct 5 05:57:56 localhost ceph-mon[308154]: Reconfiguring daemon osd.5 on np0005471151.localdomain
Oct 5 05:57:56 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:56 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:56 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:56 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 5 05:57:56 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 05:57:57 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:57 localhost ceph-mon[308154]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)...
Oct 5 05:57:57 localhost ceph-mon[308154]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain
Oct 5 05:57:57 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:57 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:57 localhost ceph-mon[308154]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)...
Oct 5 05:57:57 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:57 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:57 localhost ceph-mon[308154]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain
Oct 5 05:57:57 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:57 localhost nova_compute[297021]: 2025-10-05 09:57:57.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:57:57 localhost nova_compute[297021]: 2025-10-05 09:57:57.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 05:57:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:57:57 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:57:57 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 5 05:57:57 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:57:58 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:57:58 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[308154]: Reconfiguring crash.np0005471152 (monmap changed)...
Oct 5 05:57:58 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:58 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:58 localhost ceph-mon[308154]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain
Oct 5 05:57:58 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 5 05:57:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:57:59 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:57:59 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:57:59 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:57:59 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[308154]: Reconfiguring osd.0 (monmap changed)...
Oct 5 05:57:59 localhost ceph-mon[308154]: Reconfiguring daemon osd.0 on np0005471152.localdomain
Oct 5 05:57:59 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:58:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:58:00 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:58:00 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:58:00 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:58:00 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:58:00 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost ceph-mon[308154]: Reconfiguring osd.3 (monmap changed)...
Oct 5 05:58:00 localhost ceph-mon[308154]: Reconfiguring daemon osd.3 on np0005471152.localdomain
Oct 5 05:58:00 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:58:00 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.
Oct 5 05:58:01 localhost podman[320126]: 2025-10-05 09:58:01.000054889 +0000 UTC m=+0.076319813 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:58:01 localhost podman[320126]: 2025-10-05 09:58:01.008593169 +0000 UTC m=+0.084858083 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 05:58:01 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 05:58:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:58:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 5 05:58:01 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:58:01 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 5 05:58:01 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:01 localhost ceph-mon[308154]: Saving service mon spec with placement label:mon
Oct 5 05:58:01 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:01 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:58:01 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:01 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:01 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:58:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 05:58:02 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:58:02 localhost podman[320220]:
Oct 5 05:58:02 localhost podman[320220]: 2025-10-05 09:58:02.358399815 +0000 UTC m=+0.074638347 container create 202451b82a798b39c29f215dbe559eda739f4441319efd4d722e2bb731cff17c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_bassi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-type=git, RELEASE=main, distribution-scope=public, name=rhceph, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=)
Oct 5 05:58:02 localhost systemd[1]: Started libpod-conmon-202451b82a798b39c29f215dbe559eda739f4441319efd4d722e2bb731cff17c.scope.
Oct 5 05:58:02 localhost systemd[1]: Started libcrun container.
Oct 5 05:58:02 localhost podman[320220]: 2025-10-05 09:58:02.328246781 +0000 UTC m=+0.044485343 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:58:02 localhost podman[320220]: 2025-10-05 09:58:02.430668736 +0000 UTC m=+0.146907268 container init 202451b82a798b39c29f215dbe559eda739f4441319efd4d722e2bb731cff17c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_bassi, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, release=553, maintainer=Guillaume Abrioux , ceph=True, version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.buildah.version=1.33.12, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 05:58:02 localhost podman[320220]: 2025-10-05 09:58:02.440679166 +0000 UTC m=+0.156917708 container start 202451b82a798b39c29f215dbe559eda739f4441319efd4d722e2bb731cff17c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_bassi, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=)
Oct 5 05:58:02 localhost podman[320220]: 2025-10-05 09:58:02.441477417 +0000 UTC m=+0.157715959 container attach 202451b82a798b39c29f215dbe559eda739f4441319efd4d722e2bb731cff17c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_bassi, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-09-24T08:57:55, release=553, version=7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.buildah.version=1.33.12, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main)
Oct 5 05:58:02 localhost gracious_bassi[320235]: 167 167
Oct 5 05:58:02 localhost podman[320220]: 2025-10-05 09:58:02.445711602 +0000 UTC m=+0.161950144 container died 202451b82a798b39c29f215dbe559eda739f4441319efd4d722e2bb731cff17c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_bassi, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.openshift.expose-services=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, name=rhceph, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.33.12, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux )
Oct 5 05:58:02 localhost systemd[1]: libpod-202451b82a798b39c29f215dbe559eda739f4441319efd4d722e2bb731cff17c.scope: Deactivated successfully.
Oct 5 05:58:02 localhost podman[320240]: 2025-10-05 09:58:02.54894955 +0000 UTC m=+0.090998819 container remove 202451b82a798b39c29f215dbe559eda739f4441319efd4d722e2bb731cff17c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_bassi, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, version=7, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., release=553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:58:02 localhost systemd[1]: libpod-conmon-202451b82a798b39c29f215dbe559eda739f4441319efd4d722e2bb731cff17c.scope: Deactivated successfully. 
Oct 5 05:58:02 localhost nova_compute[297021]: 2025-10-05 09:58:02.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:58:02 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:58:02 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:02 localhost nova_compute[297021]: 2025-10-05 09:58:02.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:03 localhost ceph-mon[308154]: Reconfiguring mon.np0005471150 (monmap changed)... Oct 5 05:58:03 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:58:03 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:03 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:03 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:03 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:58:03 localhost systemd[1]: var-lib-containers-storage-overlay-00d0d911f292a6ab6c33a0d5037af0434241762b49846ebbb152108ff56f685a-merged.mount: Deactivated successfully. 
Oct 5 05:58:03 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:58:03 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:03 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:58:03 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:04 localhost ceph-mon[308154]: Reconfiguring mon.np0005471151 (monmap changed)... Oct 5 05:58:04 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:58:04 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:04 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:04 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:58:04 localhost nova_compute[297021]: 2025-10-05 09:58:04.438 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:04 localhost nova_compute[297021]: 2025-10-05 09:58:04.438 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:58:04 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:58:04 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:04 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:58:04 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:05 localhost nova_compute[297021]: 2025-10-05 09:58:05.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:05 localhost ceph-mon[308154]: Reconfiguring mon.np0005471152 (monmap changed)... 
Oct 5 05:58:05 localhost ceph-mon[308154]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:58:05 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:05 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:06 localhost nova_compute[297021]: 2025-10-05 09:58:06.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:58:06 localhost podman[320256]: 2025-10-05 09:58:06.676193598 +0000 UTC m=+0.083284370 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', 
'--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:58:06 localhost podman[320256]: 2025-10-05 09:58:06.689816056 +0000 UTC m=+0.096906478 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 
05:58:06 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.441 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.442 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.442 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.443 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.443 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:58:07 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/261725041' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.896 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.969 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:58:07 localhost nova_compute[297021]: 2025-10-05 09:58:07.969 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.195 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.196 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11671MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.197 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.197 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.265 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.266 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.266 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.302 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:58:08 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:58:08 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1191301426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.745 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.751 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.767 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.771 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:58:08 localhost nova_compute[297021]: 2025-10-05 09:58:08.772 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:58:09 localhost nova_compute[297021]: 2025-10-05 09:58:09.775 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:11 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e37: np0005471151.jecxod(active, since 24s), standbys: np0005471152.kbhlus, np0005471150.zwqxye Oct 5 05:58:11 localhost nova_compute[297021]: 2025-10-05 09:58:11.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:58:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:58:11 localhost podman[320323]: 2025-10-05 09:58:11.670012071 +0000 UTC m=+0.078788538 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid) Oct 5 05:58:11 localhost podman[320323]: 2025-10-05 09:58:11.706767184 +0000 UTC m=+0.115543591 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:58:11 localhost systemd[1]: tmp-crun.kwjmGt.mount: Deactivated successfully. Oct 5 05:58:11 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:58:11 localhost podman[320324]: 2025-10-05 09:58:11.72735959 +0000 UTC m=+0.132195300 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:58:11 localhost podman[320324]: 2025-10-05 09:58:11.74479832 +0000 UTC m=+0.149634020 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:58:11 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:58:12 localhost nova_compute[297021]: 2025-10-05 09:58:12.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:12 localhost nova_compute[297021]: 2025-10-05 09:58:12.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:13 localhost nova_compute[297021]: 2025-10-05 09:58:13.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:13 localhost nova_compute[297021]: 2025-10-05 09:58:13.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:58:13 localhost nova_compute[297021]: 2025-10-05 09:58:13.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:58:14 localhost nova_compute[297021]: 2025-10-05 09:58:14.020 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:58:14 localhost nova_compute[297021]: 2025-10-05 09:58:14.021 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:58:14 localhost nova_compute[297021]: 
2025-10-05 09:58:14.021 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:58:14 localhost nova_compute[297021]: 2025-10-05 09:58:14.022 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:58:14 localhost nova_compute[297021]: 2025-10-05 09:58:14.410 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] 
update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:58:14 localhost nova_compute[297021]: 2025-10-05 09:58:14.429 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:58:14 localhost nova_compute[297021]: 2025-10-05 09:58:14.429 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:58:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:58:17 localhost nova_compute[297021]: 2025-10-05 09:58:17.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:17 localhost podman[320361]: 2025-10-05 09:58:17.676180151 +0000 UTC m=+0.085507089 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:58:17 localhost podman[320361]: 2025-10-05 09:58:17.683791666 +0000 UTC m=+0.093118654 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:58:17 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 05:58:17 localhost nova_compute[297021]: 2025-10-05 09:58:17.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:58:20.459 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:58:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:58:20.460 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:58:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:58:20.461 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:58:20 localhost podman[320377]: 2025-10-05 09:58:20.664518817 +0000 UTC m=+0.077471752 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 5 05:58:20 localhost podman[320377]: 2025-10-05 09:58:20.730788137 +0000 UTC m=+0.143741032 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller) Oct 5 05:58:20 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 05:58:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:21 localhost podman[248506]: time="2025-10-05T09:58:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:58:21 localhost podman[248506]: @ - - [05/Oct/2025:09:58:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:58:21 localhost podman[248506]: @ - - [05/Oct/2025:09:58:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18860 "" "Go-http-client/1.1" Oct 5 05:58:22 localhost openstack_network_exporter[250601]: ERROR 09:58:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:58:22 localhost openstack_network_exporter[250601]: ERROR 09:58:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:58:22 localhost openstack_network_exporter[250601]: ERROR 09:58:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:58:22 localhost openstack_network_exporter[250601]: ERROR 09:58:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:58:22 localhost openstack_network_exporter[250601]: Oct 5 05:58:22 localhost openstack_network_exporter[250601]: ERROR 09:58:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:58:22 localhost openstack_network_exporter[250601]: Oct 5 05:58:22 localhost nova_compute[297021]: 2025-10-05 09:58:22.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:22 localhost nova_compute[297021]: 2025-10-05 09:58:22.703 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:58:23 localhost podman[320402]: 2025-10-05 09:58:23.684383305 +0000 UTC m=+0.085013556 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, 
org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute) Oct 5 05:58:23 localhost podman[320402]: 2025-10-05 09:58:23.719948615 +0000 UTC m=+0.120578846 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute) Oct 5 
05:58:23 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 05:58:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:58:26 localhost podman[320422]: 2025-10-05 09:58:26.67451958 +0000 UTC m=+0.076210109 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 05:58:26 localhost podman[320422]: 2025-10-05 09:58:26.690880791 +0000 UTC m=+0.092571310 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, io.buildah.version=1.33.7, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public) Oct 5 05:58:26 localhost systemd[1]: 
2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:58:27 localhost nova_compute[297021]: 2025-10-05 09:58:27.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:27 localhost nova_compute[297021]: 2025-10-05 09:58:27.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 05:58:31 localhost podman[320442]: 2025-10-05 09:58:31.671848889 +0000 UTC m=+0.079359737 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:58:31 localhost podman[320442]: 2025-10-05 09:58:31.680284389 +0000 UTC 
m=+0.087795237 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:58:31 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:58:32 localhost nova_compute[297021]: 2025-10-05 09:58:32.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:32 localhost nova_compute[297021]: 2025-10-05 09:58:32.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:58:37 localhost nova_compute[297021]: 2025-10-05 09:58:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:37 localhost podman[320467]: 2025-10-05 09:58:37.674173852 +0000 UTC m=+0.086128082 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:58:37 localhost podman[320467]: 2025-10-05 09:58:37.683043083 +0000 UTC m=+0.094997293 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:58:37 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:58:37 localhost nova_compute[297021]: 2025-10-05 09:58:37.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.838 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.839 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.840 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.863 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.864 12 DEBUG ceilometer.compute.pollsters [-] 
2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '56f3c3d2-ed49-4b4e-9999-b6d28e7ef5e0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:58:38.840368', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ddb97132-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': 'f0d921e4fbb6e65402624ec5199a8b3e0cff19c274fd78da8209922feb1f06d2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 
'8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:58:38.840368', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ddb98adc-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': '4cafe1751a2041073b79aba6e6c3f7eb08a022fa06662063857d54165842575c'}]}, 'timestamp': '2025-10-05 09:58:38.864950', '_unique_id': '9b60f33b525745b9b3da350ace197f0d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.866 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.868 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.873 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '761bfa2f-0e63-4c3e-9577-04dc908c4bea', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.868204', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddbae698-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': '2cad3d604d2e614348ea8b556fa70235c594ac20ca67a8ea3efc3e53ec58b9d2'}]}, 'timestamp': '2025-10-05 09:58:38.873881', '_unique_id': '52ac9c83097c4a948f6d8039ec0ae0fd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:58:38.874 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:58:38.874 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.874 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.876 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.893 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '90ffb1f9-6b8a-4679-87f4-034238b4a751', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:58:38.876217', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'ddbe0ff8-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.117760256, 'message_signature': '6c270dbab6fbd9070939536bdae9211437ed7d79fe0a480ee96bdaf78e98336f'}]}, 'timestamp': '2025-10-05 09:58:38.894627', '_unique_id': 'd8526377ecbb49a2aaf1ea26154c0b3e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.895 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.896 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.897 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4d36bdc3-0fe4-4a25-b784-a9a54db055d5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.896998', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddbe87d0-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': 'd6222809fb66206d3792635b72edf2363cb89208744689da0489ac5f7facbc76'}]}, 'timestamp': '2025-10-05 09:58:38.897769', '_unique_id': 'ff13468e4989415e8d5cb7ce579e54a9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.899 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.900 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.912 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.912 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2cc30b0c-bca8-4294-a2e5-1679134fc652', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:58:38.900919', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ddc0d990-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.1248627, 'message_signature': '86c8a7a3e8e84baef7ee21646a08b69647eaa0d2163ef12545a43be6ffb4964f'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:58:38.900919', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ddc0f240-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.1248627, 'message_signature': '5f25fdb218e0162695378a908d374aa25eb5d804589ec3cbe044f683cdde7e09'}]}, 'timestamp': '2025-10-05 09:58:38.913508', '_unique_id': '24267818b9594b61a4556f4e428cef99'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.914 12 ERROR oslo_messaging.notify.messaging
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.915 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.915 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '95b4b4cb-86e6-4107-a2f3-22188b55c64c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.915882', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddc16982-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': '8461aec0b8c8fc1166c78986c0c236a3490aeb66fe5ee90bd305d438a1cfe552'}]}, 'timestamp': '2025-10-05 09:58:38.916629', '_unique_id': '47218c67e17540b7aabbfbf36e999b3c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:58:38.917 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.917 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.919 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.919 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.920 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1a164d6e-9617-4793-9742-e9cfce321ee7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:58:38.919783', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ddc201c6-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': '9ce060141c22e2ddf6da5280a46cdde0ccde3ade75e4e13b316b5383743453de'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:58:38.919783', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ddc21bfc-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': '2afed38d55c9df5274ad70b5647eb42dce6474a2e0044c68507a279ae92d4cbd'}]}, 'timestamp': '2025-10-05 09:58:38.921120', '_unique_id': 'c697833b66f045c1a762183f14502b06'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.924 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.924 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a2d4737b-c2c5-4d61-bebb-ce8f95033fca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.924346', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddc2b634-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': 'cce5bdd5b459d687618bbf78004dd853b8305942ac3a47a2b286c75c0986a952'}]}, 'timestamp': '2025-10-05 09:58:38.925100', '_unique_id': '858ec42e153140488cbf60371dbc141d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.926 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.928 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.928 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f23a1a37-0c7d-4eae-a155-ed3174c1ac6f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.928260', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddc34eb4-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': '096486a8481550fd408062a63965f1c504a353b99711c0a5418032affe39f433'}]}, 'timestamp': '2025-10-05 09:58:38.929007', '_unique_id': 'af3a5aac775a483b93f356f798c56ecd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:58:38.930 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.931 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.932 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.932 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8fbf983e-5941-41fb-9185-7545feedb28c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:58:38.932147', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ddc3e5f4-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': 'ae7527df7de23d4dc80ea79f68dcf3d4205fc67e9df57d2d0eb321d1aaf5b27d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:58:38.932147', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ddc3fe72-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': '54dc4b17bd16b0df04ba127e94ce6029bbbd199e910890bbc0ab31d84bc01a35'}]}, 'timestamp': '2025-10-05 09:58:38.933507', '_unique_id': '4b83dc60e8cb4a989737eeb66195b4c3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.934 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.936 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.936 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.937 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to 
notifications. Payload={'message_id': '63d60ce3-97b1-4751-859a-62726314fd3f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.937107', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddc4a7d2-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': '3b6a1e8b946e09f314b18516862133235e97635cf7b6a9a4038f449c7d3e1649'}]}, 'timestamp': '2025-10-05 09:58:38.937864', '_unique_id': '5f9d5f0acc7243479413695d5468e535'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.939 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.940 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.941 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.942 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '63c40412-e581-4383-b162-88b9240c6b4a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:58:38.941210', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ddc547fa-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.1248627, 'message_signature': '0e95f34e08824f42960cfd53ffec820c2b2bf54b024794a4fa96ce3542e860f8'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:58:38.941210', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ddc561d6-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.1248627, 'message_signature': '9f2f04de3b68cf89e38a7433015b7b14fd6e7f67cb65f3070db7901389eb6e3d'}]}, 'timestamp': '2025-10-05 09:58:38.942519', '_unique_id': 'cf3da3cf16c84342aed2fedd64c0398a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.943 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.944 12 DEBUG ceilometer.compute.pollsters [-] 
2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7fe1261e-4c72-4e79-a29e-c4137e1a7009', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:58:38.943951', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ddc5a970-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.1248627, 'message_signature': 'cec4ce1c9e03ccaf3910753b5d7a7d2128cfcc8413589081e1606409dab23ceb'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 
'8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:58:38.943951', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ddc5b384-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.1248627, 'message_signature': '2c456c54ee567059cf568e94fab556b5eded3f338a74f3441ebdfbd1b4b22848'}]}, 'timestamp': '2025-10-05 09:58:38.944493', '_unique_id': '5308c67c078f4eab8dd628c5858fb5f5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd65f3d09-f8d9-428d-b8db-7ecbf2519751', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:58:38.945824', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ddc5f29a-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': '3bf08241a4a3f59209b2c6b43d6bdfa83f47e147737f31ea2c3f6a42842844c2'}, {'source': 'openstack', 'counter_name': 
'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:58:38.945824', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ddc5fc9a-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': '44e5bd27c199a8edf32bafdd46ba3081f0addc34d213e16f9a3637d4048330e9'}]}, 'timestamp': '2025-10-05 09:58:38.946343', '_unique_id': 'f18526fefe0347aca46b52382f9d0cef'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 
423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.947 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.947 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '92ee08cb-6011-403b-86c3-5ac27f9a91ef', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.947665', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddc63aa2-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': '0c83d62afd4f268fdbe7f43e3820af308860f586556b710b13f2012300068552'}]}, 'timestamp': '2025-10-05 09:58:38.947949', '_unique_id': 'c8eb0c25aa0a4742b45018f3ce3ff581'}: kombu.exceptions.OperationalError: 
[Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR 
oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause 
of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.949 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.949 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dcf3db42-f60a-4e18-ae19-0307d6656bc0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.949212', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddc67706-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': '3e8fa4df14c00552bd16406641cc583ab37cff2735ecac7a940bd7b5d0c97b8f'}]}, 'timestamp': '2025-10-05 09:58:38.949521', '_unique_id': '57e6ea4b6fc04019bdda3785a7322c4e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.950 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e51fb055-3213-435e-be1a-7f5291e99a95', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.950802', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddc6b50e-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': '9100d45aae788542ffd97fe040208f5fceed8e2af9c592f76ab1e75e3f773825'}]}, 'timestamp': '2025-10-05 09:58:38.951081', '_unique_id': '12860c4f56454aefba14601f50463c6b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.952 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0604875c-cacf-4917-9e18-12663ce591a6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:58:38.952352', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ddc6f2c6-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': '948735f0ace8a20694c64fa879a0c9714928aa4034473c1bac6eba7bc4b64ccb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:58:38.952352', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ddc6fcbc-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': 'a137ce12293a3160c54ac111b17c3acc95b741f0975eeb6570ae910bd15eafbb'}]}, 'timestamp': '2025-10-05 09:58:38.952900', '_unique_id': '4132245bc0db49e9afe5a9daa32bfd7e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 09:58:38.954 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.955 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b0b5187b-c3ae-49fd-aaea-b7a57550f4ee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T09:58:38.954389', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ddc74212-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': '61ab572c6c2ba6317bf499ef06b9b3e7e4ee15f31bdcb4a454e65163310a1800'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T09:58:38.954389', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ddc74c26-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.064312024, 'message_signature': '5b8e44949301ed1b2eed37bac92b396631a35bc2715bda7ebce4b5a1a93fa34a'}]}, 'timestamp': '2025-10-05 09:58:38.954933', '_unique_id': '381073c40e2f40158ee4a29eac196752'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.955 12 
ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.956 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.956 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.957 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b347acbb-aa95-456c-ab4f-2e6cdaaad09f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T09:58:38.956231', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64',
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'ddc7893e-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.09215056, 'message_signature': 'ac618b203f9fe9499ade21d35f7434a49f4b191decbbe601a3200dad8b59fef2'}]}, 'timestamp': '2025-10-05 09:58:38.956535', '_unique_id': '129194dc61254f79984600b60740078e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.957 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.957 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.957 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 14330000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '7c907f16-847a-479b-9bd4-be7578d88630', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 14330000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T09:58:38.957817', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'ddc7c714-a1d1-11f0-9396-fa163ec6f33d', 'monotonic_time': 11798.117760256, 'message_signature': 'dd6fd7e59cbc276d6e59e48243b435163b30063b5ea8dad10b5ce76584948460'}]}, 'timestamp': '2025-10-05 09:58:38.958089', '_unique_id': '48a9d194e7b540148c1c0bf0d6c6ed24'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging yield Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 05:58:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 05:58:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 09:58:38.958 12 ERROR oslo_messaging.notify.messaging Oct 5 05:58:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Oct 5 05:58:40 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.200:0/1052023765' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Oct 5 05:58:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:58:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:58:42 localhost nova_compute[297021]: 2025-10-05 09:58:42.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:42 localhost podman[320489]: 2025-10-05 09:58:42.705295546 +0000 UTC m=+0.102804086 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:58:42 localhost nova_compute[297021]: 2025-10-05 09:58:42.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:42 localhost podman[320489]: 2025-10-05 09:58:42.725094774 +0000 UTC m=+0.122603264 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:58:42 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 05:58:42 localhost systemd[1]: tmp-crun.R7MuqC.mount: Deactivated successfully. Oct 5 05:58:42 localhost podman[320488]: 2025-10-05 09:58:42.829002168 +0000 UTC m=+0.229456638 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:58:42 localhost podman[320488]: 2025-10-05 09:58:42.864264377 +0000 UTC m=+0.264718847 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:58:42 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 05:58:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:47 localhost nova_compute[297021]: 2025-10-05 09:58:47.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:47 localhost nova_compute[297021]: 2025-10-05 09:58:47.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0. 
Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.584703) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37 Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328584734, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2031, "num_deletes": 255, "total_data_size": 4389153, "memory_usage": 4735984, "flush_reason": "Manual Compaction"} Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328608833, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3985915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20574, "largest_seqno": 22604, "table_properties": {"data_size": 3976706, "index_size": 5522, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 22678, "raw_average_key_size": 22, "raw_value_size": 3956961, "raw_average_value_size": 3860, "num_data_blocks": 241, "num_entries": 1025, "num_filter_entries": 1025, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658256, "oldest_key_time": 1759658256, "file_creation_time": 1759658328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}} Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 24195 microseconds, and 9339 cpu microseconds. Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.608890) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3985915 bytes OK Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.608920) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.611678) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.611702) EVENT_LOG_v1 {"time_micros": 1759658328611694, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.611730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 4379769, prev total WAL file size 
4379769, number of live WAL files 2. Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.612873) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3892KB)], [36(15MB)] Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328612915, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 19767510, "oldest_snapshot_seqno": -1} Oct 5 05:58:48 localhost systemd[1]: tmp-crun.8AEsHB.mount: Deactivated successfully. 
Oct 5 05:58:48 localhost podman[320526]: 2025-10-05 09:58:48.684568142 +0000 UTC m=+0.091232860 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_metadata_agent) Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: 
[db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 11997 keys, 16779751 bytes, temperature: kUnknown Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328702592, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 16779751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16711978, "index_size": 36696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30021, "raw_key_size": 322492, "raw_average_key_size": 26, "raw_value_size": 16508364, "raw_average_value_size": 1376, "num_data_blocks": 1394, "num_entries": 11997, "num_filter_entries": 11997, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.703125) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 16779751 bytes Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.706940) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.0 rd, 186.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 15.1 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(9.2) write-amplify(4.2) OK, records in: 12543, records dropped: 546 output_compression: NoCompression Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.706973) EVENT_LOG_v1 {"time_micros": 1759658328706958, "job": 20, "event": "compaction_finished", "compaction_time_micros": 89838, "compaction_time_cpu_micros": 46437, "output_level": 6, "num_output_files": 1, "total_output_size": 16779751, "num_input_records": 12543, "num_output_records": 11997, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328707820, "job": 20, "event": "table_file_deletion", "file_number": 38} Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328710374, "job": 20, 
"event": "table_file_deletion", "file_number": 36} Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.612796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.710537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.710547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.710550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.710553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:58:48 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:58:48.710557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:58:48 localhost podman[320526]: 2025-10-05 09:58:48.731050616 +0000 UTC m=+0.137715354 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:58:48 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:58:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:51 localhost podman[248506]: time="2025-10-05T09:58:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:58:51 localhost podman[248506]: @ - - [05/Oct/2025:09:58:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:58:51 localhost podman[248506]: @ - - [05/Oct/2025:09:58:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18857 "" "Go-http-client/1.1" Oct 5 05:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:58:51 localhost podman[320542]: 2025-10-05 09:58:51.672093978 +0000 UTC m=+0.081000163 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:58:51 localhost podman[320542]: 2025-10-05 09:58:51.740273471 +0000 UTC m=+0.149179646 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:58:51 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: 
Deactivated successfully. Oct 5 05:58:52 localhost openstack_network_exporter[250601]: ERROR 09:58:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:58:52 localhost openstack_network_exporter[250601]: ERROR 09:58:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:58:52 localhost openstack_network_exporter[250601]: ERROR 09:58:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:58:52 localhost openstack_network_exporter[250601]: Oct 5 05:58:52 localhost openstack_network_exporter[250601]: ERROR 09:58:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:58:52 localhost openstack_network_exporter[250601]: ERROR 09:58:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:58:52 localhost openstack_network_exporter[250601]: Oct 5 05:58:52 localhost nova_compute[297021]: 2025-10-05 09:58:52.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:52 localhost nova_compute[297021]: 2025-10-05 09:58:52.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:58:54 localhost podman[320566]: 2025-10-05 09:58:54.685192868 +0000 UTC m=+0.085845194 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:58:54 localhost podman[320566]: 2025-10-05 09:58:54.697493383 +0000 UTC m=+0.098145739 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:58:54 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:58:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:58:57 localhost podman[320585]: 2025-10-05 09:58:57.683660202 +0000 UTC m=+0.088304982 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, managed_by=edpm_ansible) Oct 5 05:58:57 localhost podman[320585]: 2025-10-05 09:58:57.699887503 +0000 UTC m=+0.104532263 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41) Oct 5 05:58:57 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 05:58:57 localhost nova_compute[297021]: 2025-10-05 09:58:57.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:58:57 localhost nova_compute[297021]: 2025-10-05 09:58:57.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:58:57 localhost nova_compute[297021]: 2025-10-05 09:58:57.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:58:57 localhost nova_compute[297021]: 2025-10-05 09:58:57.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:58:57 localhost nova_compute[297021]: 2025-10-05 09:58:57.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:58:57 localhost nova_compute[297021]: 2025-10-05 09:58:57.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:59:02 localhost podman[320606]: 2025-10-05 09:59:02.67929746 +0000 UTC m=+0.085242247 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:59:02 localhost podman[320606]: 2025-10-05 09:59:02.691854612 +0000 UTC m=+0.097799389 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:59:02 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:59:02 localhost nova_compute[297021]: 2025-10-05 09:59:02.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:02 localhost nova_compute[297021]: 2025-10-05 09:59:02.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:02 localhost nova_compute[297021]: 2025-10-05 09:59:02.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:59:02 localhost nova_compute[297021]: 2025-10-05 09:59:02.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:02 localhost nova_compute[297021]: 2025-10-05 09:59:02.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:02 localhost nova_compute[297021]: 2025-10-05 09:59:02.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:05 localhost nova_compute[297021]: 2025-10-05 09:59:05.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:05 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 05:59:05 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:59:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:06 localhost ceph-mon[308154]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:59:06 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:59:06 localhost nova_compute[297021]: 2025-10-05 09:59:06.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:06 localhost nova_compute[297021]: 2025-10-05 09:59:06.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:06 localhost nova_compute[297021]: 2025-10-05 09:59:06.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:59:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 05:59:07 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:59:07 localhost nova_compute[297021]: 2025-10-05 09:59:07.418 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:07 localhost ceph-mon[308154]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:59:07 localhost nova_compute[297021]: 2025-10-05 09:59:07.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:07 localhost nova_compute[297021]: 2025-10-05 09:59:07.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:08 localhost nova_compute[297021]: 2025-10-05 09:59:08.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:08 localhost nova_compute[297021]: 2025-10-05 09:59:08.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:08 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) 
Oct 5 05:59:08 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/494027579' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 05:59:08 localhost nova_compute[297021]: 2025-10-05 09:59:08.451 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:59:08 localhost nova_compute[297021]: 2025-10-05 09:59:08.452 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:59:08 localhost nova_compute[297021]: 2025-10-05 09:59:08.452 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:59:08 localhost nova_compute[297021]: 2025-10-05 09:59:08.453 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:59:08 localhost nova_compute[297021]: 2025-10-05 09:59:08.453 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:59:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 05:59:08 localhost systemd[1]: tmp-crun.e66vzS.mount: Deactivated successfully. Oct 5 05:59:08 localhost podman[320725]: 2025-10-05 09:59:08.688706385 +0000 UTC m=+0.096110473 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:59:08 localhost podman[320725]: 2025-10-05 09:59:08.701834902 +0000 UTC m=+0.109239020 container exec_died 
fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:59:08 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 05:59:08 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:59:08 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1071172252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:59:08 localhost nova_compute[297021]: 2025-10-05 09:59:08.917 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.000 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.001 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.212 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.213 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11675MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.213 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.213 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.300 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.300 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.301 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.356 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:59:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:59:09 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3807899084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.816 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.823 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.841 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.843 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:59:09 localhost nova_compute[297021]: 2025-10-05 09:59:09.844 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:59:10 localhost nova_compute[297021]: 2025-10-05 09:59:10.844 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:10 localhost nova_compute[297021]: 2025-10-05 09:59:10.871 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mgr fail"} v 0) Oct 5 05:59:11 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:59:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e86 do_prune osdmap full prune enabled Oct 5 05:59:11 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : Activating manager daemon np0005471152.kbhlus Oct 5 05:59:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 e87: 6 total, 6 up, 6 in Oct 5 05:59:11 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e87: 6 total, 6 up, 6 in Oct 5 05:59:11 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:59:11 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e38: np0005471152.kbhlus(active, starting, since 0.0258468s), standbys: np0005471150.zwqxye Oct 5 05:59:11 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : Manager daemon np0005471152.kbhlus is now available Oct 5 05:59:11 localhost systemd[1]: session-74.scope: Deactivated successfully. Oct 5 05:59:11 localhost systemd[1]: session-74.scope: Consumed 7.944s CPU time. Oct 5 05:59:11 localhost systemd-logind[760]: Session 74 logged out. Waiting for processes to exit. Oct 5 05:59:11 localhost systemd-logind[760]: Removed session 74. Oct 5 05:59:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} v 0) Oct 5 05:59:11 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:59:11 localhost ceph-mon[308154]: from='client.? 172.18.0.200:0/4285775013' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:59:11 localhost ceph-mon[308154]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:59:11 localhost ceph-mon[308154]: Activating manager daemon np0005471152.kbhlus Oct 5 05:59:11 localhost ceph-mon[308154]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:59:11 localhost ceph-mon[308154]: Manager daemon np0005471152.kbhlus is now available Oct 5 05:59:11 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:59:11 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:59:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} v 0) Oct 5 05:59:11 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch Oct 5 05:59:11 localhost sshd[320781]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:59:11 localhost systemd-logind[760]: New session 75 of user ceph-admin. Oct 5 05:59:11 localhost systemd[1]: Started Session 75 of User ceph-admin. 
Oct 5 05:59:12 localhost nova_compute[297021]: 2025-10-05 09:59:12.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:12 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e39: np0005471152.kbhlus(active, since 1.04169s), standbys: np0005471150.zwqxye Oct 5 05:59:12 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch Oct 5 05:59:12 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch Oct 5 05:59:12 localhost nova_compute[297021]: 2025-10-05 09:59:12.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:12 localhost systemd[1]: tmp-crun.DxDK8e.mount: Deactivated successfully. 
Oct 5 05:59:12 localhost podman[320893]: 2025-10-05 09:59:12.910864019 +0000 UTC m=+0.109360543 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux , vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=553, name=rhceph, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:59:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:59:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 05:59:13 localhost systemd[1]: tmp-crun.LPLOwK.mount: Deactivated successfully. 
Oct 5 05:59:13 localhost podman[320914]: 2025-10-05 09:59:13.029721261 +0000 UTC m=+0.094491260 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:59:13 localhost podman[320914]: 2025-10-05 09:59:13.065975396 +0000 UTC m=+0.130745445 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:59:13 localhost podman[320913]: 2025-10-05 09:59:13.078599719 +0000 UTC m=+0.142952617 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:59:13 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:59:13 localhost podman[320893]: 2025-10-05 09:59:13.086589526 +0000 UTC m=+0.285086050 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, RELEASE=main, name=rhceph, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc.) 
Oct 5 05:59:13 localhost podman[320913]: 2025-10-05 09:59:13.140109441 +0000 UTC m=+0.204462369 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 05:59:13 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:59:13 localhost nova_compute[297021]: 2025-10-05 09:59:13.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:13 localhost nova_compute[297021]: 2025-10-05 09:59:13.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:59:13 localhost nova_compute[297021]: 2025-10-05 09:59:13.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:59:13 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Oct 5 05:59:13 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Oct 5 05:59:13 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : Cluster is now healthy Oct 5 05:59:13 localhost ceph-mon[308154]: [05/Oct/2025:09:59:12] ENGINE Bus STARTING Oct 5 05:59:13 localhost ceph-mon[308154]: [05/Oct/2025:09:59:12] ENGINE Serving on http://172.18.0.108:8765 Oct 5 05:59:13 localhost ceph-mon[308154]: [05/Oct/2025:09:59:13] ENGINE Serving on https://172.18.0.108:7150 Oct 5 05:59:13 localhost ceph-mon[308154]: [05/Oct/2025:09:59:13] ENGINE Bus STARTED Oct 5 05:59:13 localhost ceph-mon[308154]: [05/Oct/2025:09:59:13] ENGINE Client ('172.18.0.108', 46836) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:59:13 
localhost ceph-mon[308154]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Oct 5 05:59:13 localhost ceph-mon[308154]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Oct 5 05:59:13 localhost ceph-mon[308154]: Cluster is now healthy Oct 5 05:59:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:59:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:59:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:59:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:59:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:59:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:59:13 localhost ceph-mon[308154]: log_channel(audit) 
log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:14 localhost nova_compute[297021]: 2025-10-05 09:59:14.052 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:59:14 localhost nova_compute[297021]: 2025-10-05 09:59:14.053 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:59:14 localhost nova_compute[297021]: 2025-10-05 09:59:14.053 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 05:59:14 localhost nova_compute[297021]: 2025-10-05 09:59:14.053 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 05:59:14 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:14 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:14 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:14 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:14 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:14 localhost 
ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:14 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e40: np0005471152.kbhlus(active, since 3s), standbys: np0005471150.zwqxye Oct 5 05:59:15 localhost nova_compute[297021]: 2025-10-05 09:59:15.050 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 05:59:15 localhost nova_compute[297021]: 2025-10-05 09:59:15.077 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 
05:59:15 localhost nova_compute[297021]: 2025-10-05 09:59:15.077 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: 
log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: 
Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:59:15 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost 
ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:59:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:59:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0. Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.264858) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40 Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356264905, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 747, "num_deletes": 256, "total_data_size": 1395501, "memory_usage": 1415744, "flush_reason": "Manual Compaction"} Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356273646, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1356453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22605, "largest_seqno": 23351, "table_properties": {"data_size": 1352564, "index_size": 1616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9414, "raw_average_key_size": 19, "raw_value_size": 1344161, "raw_average_value_size": 2771, "num_data_blocks": 68, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658329, "oldest_key_time": 1759658329, "file_creation_time": 1759658356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}} Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8834 microseconds, and 4339 cpu microseconds. Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.273690) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1356453 bytes OK Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.273712) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.276461) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.276572) EVENT_LOG_v1 {"time_micros": 1759658356276547, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.276648) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1391444, prev total WAL file size 1391444, number of live WAL files 2. Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.277504) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353137' seq:72057594037927935, type:22 .. 
'6B760031373733' seq:0, type:0; will stop at (end) Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1324KB)], [39(16MB)] Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356277772, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 18136204, "oldest_snapshot_seqno": -1} Oct 5 05:59:16 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : Standby manager daemon np0005471151.jecxod started Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 11940 keys, 16960888 bytes, temperature: kUnknown Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356383551, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 16960888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16894170, "index_size": 35769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29893, "raw_key_size": 323105, "raw_average_key_size": 27, "raw_value_size": 16692126, "raw_average_value_size": 1398, "num_data_blocks": 1338, "num_entries": 11940, "num_filter_entries": 11940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", 
"property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}} Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.383984) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 16960888 bytes Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.385907) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.2 rd, 160.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 16.0 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(25.9) write-amplify(12.5) OK, records in: 12482, records dropped: 542 output_compression: NoCompression Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.385941) EVENT_LOG_v1 {"time_micros": 1759658356385926, "job": 22, "event": "compaction_finished", "compaction_time_micros": 105927, "compaction_time_cpu_micros": 48167, "output_level": 6, "num_output_files": 1, "total_output_size": 16960888, "num_input_records": 12482, "num_output_records": 11940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 
05:59:16 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356386326, "job": 22, "event": "table_file_deletion", "file_number": 41} Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356388976, "job": 22, "event": "table_file_deletion", "file_number": 39} Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.277386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.389129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.389145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.389149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.389248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:59:16 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-09:59:16.389252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:59:16 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 05:59:16 
localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:59:16 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:59:16 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:59:16 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:59:16 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:59:16 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:59:16 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:59:16 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:59:16 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:59:17 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e41: np0005471152.kbhlus(active, since 5s), standbys: np0005471150.zwqxye, np0005471151.jecxod Oct 5 05:59:17 localhost nova_compute[297021]: 2025-10-05 09:59:17.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:17 localhost nova_compute[297021]: 2025-10-05 09:59:17.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:17 localhost nova_compute[297021]: 2025-10-05 09:59:17.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:59:17 localhost nova_compute[297021]: 2025-10-05 09:59:17.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:17 localhost nova_compute[297021]: 2025-10-05 09:59:17.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:17 localhost nova_compute[297021]: 2025-10-05 09:59:17.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:17 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:59:17 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:59:17 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:59:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:59:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:59:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:59:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 
05:59:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:59:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:59:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:59:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 05:59:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 05:59:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[308154]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:59:19 localhost ceph-mon[308154]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:59:19 localhost ceph-mon[308154]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:59:19 
localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:59:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:59:19 localhost podman[321844]: 2025-10-05 09:59:19.687943811 +0000 UTC m=+0.092174467 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:59:19 localhost podman[321844]: 2025-10-05 09:59:19.696955005 +0000 UTC 
m=+0.101185681 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:59:19 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:59:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:59:20.461 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:59:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:59:20.462 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:59:20 localhost ovn_metadata_agent[163429]: 2025-10-05 09:59:20.462 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:59:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:21 localhost podman[248506]: time="2025-10-05T09:59:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:59:21 localhost podman[248506]: @ - - [05/Oct/2025:09:59:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:59:21 localhost podman[248506]: @ - - [05/Oct/2025:09:59:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18874 "" "Go-http-client/1.1" Oct 5 05:59:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 05:59:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 
' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:22 localhost openstack_network_exporter[250601]: ERROR 09:59:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:59:22 localhost openstack_network_exporter[250601]: ERROR 09:59:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:59:22 localhost openstack_network_exporter[250601]: ERROR 09:59:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:59:22 localhost openstack_network_exporter[250601]: ERROR 09:59:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:59:22 localhost openstack_network_exporter[250601]: Oct 5 05:59:22 localhost openstack_network_exporter[250601]: ERROR 09:59:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:59:22 localhost openstack_network_exporter[250601]: Oct 5 05:59:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 05:59:22 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:22 localhost podman[321863]: 2025-10-05 09:59:22.671651752 +0000 UTC m=+0.082485003 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:59:22 localhost podman[321863]: 2025-10-05 09:59:22.751974725 +0000 UTC m=+0.162807986 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:59:22 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:59:22 localhost nova_compute[297021]: 2025-10-05 09:59:22.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:22 localhost nova_compute[297021]: 2025-10-05 09:59:22.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 05:59:25 localhost podman[321888]: 2025-10-05 09:59:25.675742208 +0000 UTC m=+0.087292004 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:59:25 localhost podman[321888]: 2025-10-05 09:59:25.693822069 +0000 UTC m=+0.105371795 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:59:25 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:59:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:27 localhost nova_compute[297021]: 2025-10-05 09:59:27.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:27 localhost nova_compute[297021]: 2025-10-05 09:59:27.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:27 localhost nova_compute[297021]: 2025-10-05 09:59:27.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:59:27 localhost nova_compute[297021]: 2025-10-05 09:59:27.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:27 localhost nova_compute[297021]: 2025-10-05 09:59:27.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:27 localhost nova_compute[297021]: 2025-10-05 09:59:27.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 05:59:28 localhost podman[321907]: 2025-10-05 09:59:28.674714444 +0000 UTC m=+0.085928627 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 05:59:28 localhost podman[321907]: 2025-10-05 09:59:28.687529042 +0000 UTC m=+0.098743265 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
io.openshift.tags=minimal rhel9, release=1755695350) Oct 5 05:59:28 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 05:59:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:32 localhost nova_compute[297021]: 2025-10-05 09:59:32.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:32 localhost nova_compute[297021]: 2025-10-05 09:59:32.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:32 localhost nova_compute[297021]: 2025-10-05 09:59:32.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:59:32 localhost nova_compute[297021]: 2025-10-05 09:59:32.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:32 localhost nova_compute[297021]: 2025-10-05 09:59:32.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:32 localhost nova_compute[297021]: 2025-10-05 09:59:32.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 05:59:33 localhost podman[321929]: 2025-10-05 09:59:33.667369302 +0000 UTC m=+0.077736384 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:59:33 localhost podman[321929]: 2025-10-05 09:59:33.677039544 +0000 UTC m=+0.087406616 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:59:33 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 05:59:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:37 localhost nova_compute[297021]: 2025-10-05 09:59:37.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:37 localhost nova_compute[297021]: 2025-10-05 09:59:37.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:37 localhost nova_compute[297021]: 2025-10-05 09:59:37.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:59:37 localhost nova_compute[297021]: 2025-10-05 09:59:37.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:37 localhost nova_compute[297021]: 2025-10-05 09:59:37.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:37 localhost nova_compute[297021]: 2025-10-05 09:59:37.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 05:59:39 localhost podman[321952]: 2025-10-05 09:59:39.668639935 +0000 UTC m=+0.079004888 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:59:39 localhost podman[321952]: 2025-10-05 09:59:39.682870752 +0000 UTC m=+0.093235685 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:59:39 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 05:59:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:42 localhost nova_compute[297021]: 2025-10-05 09:59:42.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:42 localhost nova_compute[297021]: 2025-10-05 09:59:42.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:42 localhost nova_compute[297021]: 2025-10-05 09:59:42.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:59:42 localhost nova_compute[297021]: 2025-10-05 09:59:42.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:42 localhost nova_compute[297021]: 2025-10-05 09:59:42.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:42 localhost nova_compute[297021]: 2025-10-05 09:59:42.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 05:59:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 05:59:43 localhost podman[321976]: 2025-10-05 09:59:43.681070459 +0000 UTC m=+0.087585571 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid) Oct 5 05:59:43 localhost podman[321976]: 2025-10-05 09:59:43.719068892 +0000 UTC m=+0.125584034 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:59:43 localhost systemd[1]: tmp-crun.0FSeEB.mount: Deactivated successfully. Oct 5 05:59:43 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 05:59:43 localhost podman[321977]: 2025-10-05 09:59:43.737588005 +0000 UTC m=+0.140581232 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Oct 5 05:59:43 localhost podman[321977]: 2025-10-05 09:59:43.7528333 +0000 UTC m=+0.155826507 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:59:43 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 05:59:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:47 localhost nova_compute[297021]: 2025-10-05 09:59:47.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:47 localhost nova_compute[297021]: 2025-10-05 09:59:47.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:47 localhost nova_compute[297021]: 2025-10-05 09:59:47.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:59:47 localhost nova_compute[297021]: 2025-10-05 09:59:47.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:47 localhost nova_compute[297021]: 2025-10-05 09:59:47.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:47 localhost nova_compute[297021]: 2025-10-05 09:59:47.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 05:59:50 localhost podman[322014]: 2025-10-05 09:59:50.676501245 +0000 UTC m=+0.085815203 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Oct 5 05:59:50 localhost podman[322014]: 2025-10-05 09:59:50.712041381 +0000 UTC 
m=+0.121355319 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Oct 5 05:59:50 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 05:59:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:51 localhost podman[248506]: time="2025-10-05T09:59:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:59:51 localhost podman[248506]: @ - - [05/Oct/2025:09:59:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 05:59:51 localhost podman[248506]: @ - - [05/Oct/2025:09:59:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18866 "" "Go-http-client/1.1" Oct 5 05:59:52 localhost openstack_network_exporter[250601]: ERROR 09:59:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:59:52 localhost openstack_network_exporter[250601]: ERROR 09:59:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:59:52 localhost openstack_network_exporter[250601]: ERROR 09:59:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:59:52 localhost openstack_network_exporter[250601]: ERROR 09:59:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:59:52 localhost openstack_network_exporter[250601]: Oct 5 05:59:52 localhost openstack_network_exporter[250601]: ERROR 09:59:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:59:52 localhost openstack_network_exporter[250601]: Oct 5 05:59:52 localhost nova_compute[297021]: 2025-10-05 09:59:52.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:52 localhost nova_compute[297021]: 2025-10-05 09:59:52.939 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 05:59:52 localhost nova_compute[297021]: 2025-10-05 09:59:52.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 05:59:52 localhost nova_compute[297021]: 2025-10-05 09:59:52.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:52 localhost nova_compute[297021]: 2025-10-05 09:59:52.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:52 localhost nova_compute[297021]: 2025-10-05 09:59:52.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 05:59:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 05:59:53 localhost systemd[1]: tmp-crun.zQ9XkE.mount: Deactivated successfully. 
Oct 5 05:59:53 localhost podman[322033]: 2025-10-05 09:59:53.68167955 +0000 UTC m=+0.094106698 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 05:59:53 localhost podman[322033]: 2025-10-05 09:59:53.72285419 +0000 UTC m=+0.135281348 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 05:59:53 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 05:59:55 localhost sshd[322059]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:59:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 05:59:56 localhost systemd[1]: tmp-crun.jdEezM.mount: Deactivated successfully. 
Oct 5 05:59:56 localhost podman[322061]: 2025-10-05 09:59:56.685248671 +0000 UTC m=+0.092974858 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:59:56 localhost podman[322061]: 2025-10-05 09:59:56.719732279 +0000 UTC m=+0.127458486 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001) Oct 5 05:59:56 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 05:59:57 localhost nova_compute[297021]: 2025-10-05 09:59:57.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:57 localhost nova_compute[297021]: 2025-10-05 09:59:57.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 05:59:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 05:59:58 localhost podman[322081]: 2025-10-05 09:59:58.801963737 +0000 UTC m=+0.076901951 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 05:59:58 localhost podman[322081]: 2025-10-05 09:59:58.818816135 +0000 UTC m=+0.093754349 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9) Oct 5 05:59:58 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:00:00 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : overall HEALTH_OK Oct 5 06:00:00 localhost ceph-mon[308154]: overall HEALTH_OK Oct 5 06:00:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:03 localhost nova_compute[297021]: 2025-10-05 10:00:02.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:03 localhost nova_compute[297021]: 2025-10-05 10:00:03.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:03 localhost nova_compute[297021]: 2025-10-05 10:00:03.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5032 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:00:03 localhost nova_compute[297021]: 2025-10-05 10:00:03.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:03 localhost nova_compute[297021]: 2025-10-05 10:00:03.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:03 localhost nova_compute[297021]: 2025-10-05 10:00:03.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:00:04 localhost podman[322103]: 2025-10-05 10:00:04.68588628 +0000 UTC m=+0.086631335 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:00:04 localhost podman[322103]: 2025-10-05 10:00:04.692133461 +0000 UTC m=+0.092878516 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:00:04 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:00:05 localhost nova_compute[297021]: 2025-10-05 10:00:05.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:07 localhost nova_compute[297021]: 2025-10-05 10:00:07.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:07 localhost nova_compute[297021]: 2025-10-05 10:00:07.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:07 localhost nova_compute[297021]: 2025-10-05 10:00:07.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:00:08 localhost nova_compute[297021]: 2025-10-05 10:00:08.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:08 localhost nova_compute[297021]: 2025-10-05 10:00:08.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:08 localhost nova_compute[297021]: 2025-10-05 10:00:08.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:00:08 localhost nova_compute[297021]: 2025-10-05 10:00:08.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:08 localhost nova_compute[297021]: 2025-10-05 10:00:08.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:08 localhost nova_compute[297021]: 2025-10-05 10:00:08.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:08 localhost nova_compute[297021]: 2025-10-05 10:00:08.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:09 localhost nova_compute[297021]: 2025-10-05 10:00:09.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.443 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.444 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.444 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.444 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing 
locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.445 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:00:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:00:10 localhost podman[322137]: 2025-10-05 10:00:10.68437244 +0000 UTC m=+0.088617701 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:00:10 localhost podman[322137]: 2025-10-05 10:00:10.723121602 +0000 UTC m=+0.127366823 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:00:10 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:00:10 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:00:10 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3425042657' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.920 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.976 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:00:10 localhost nova_compute[297021]: 2025-10-05 10:00:10.977 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.195 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.197 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11656MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.198 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.198 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.282 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.282 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.283 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:00:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.358 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:00:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:00:11 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3000583326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.800 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.809 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.833 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.836 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:00:11 localhost nova_compute[297021]: 2025-10-05 10:00:11.836 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:00:13 localhost nova_compute[297021]: 2025-10-05 10:00:13.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:13 localhost nova_compute[297021]: 2025-10-05 10:00:13.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:13 localhost nova_compute[297021]: 2025-10-05 10:00:13.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:00:13 localhost nova_compute[297021]: 2025-10-05 10:00:13.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:13 localhost nova_compute[297021]: 2025-10-05 10:00:13.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:13 localhost nova_compute[297021]: 2025-10-05 10:00:13.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:00:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 06:00:14 localhost podman[322193]: 2025-10-05 10:00:14.684995283 +0000 UTC m=+0.088837816 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:00:14 localhost podman[322193]: 2025-10-05 10:00:14.718080301 +0000 UTC m=+0.121922834 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 06:00:14 localhost systemd[1]: tmp-crun.UwQ2JV.mount: Deactivated successfully. Oct 5 06:00:14 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:00:14 localhost podman[322194]: 2025-10-05 10:00:14.743672178 +0000 UTC m=+0.146341160 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd) Oct 5 06:00:14 localhost podman[322194]: 2025-10-05 10:00:14.783991323 +0000 UTC m=+0.186660335 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 06:00:14 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:00:14 localhost nova_compute[297021]: 2025-10-05 10:00:14.837 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:15 localhost nova_compute[297021]: 2025-10-05 10:00:15.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:15 localhost nova_compute[297021]: 2025-10-05 10:00:15.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:00:15 localhost nova_compute[297021]: 2025-10-05 10:00:15.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:00:15 localhost nova_compute[297021]: 2025-10-05 10:00:15.533 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:00:15 localhost nova_compute[297021]: 2025-10-05 10:00:15.534 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:00:15 localhost nova_compute[297021]: 2025-10-05 10:00:15.534 2 DEBUG nova.network.neutron [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:00:15 localhost nova_compute[297021]: 2025-10-05 10:00:15.534 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:00:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:16 localhost nova_compute[297021]: 2025-10-05 10:00:16.389 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": 
true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:00:16 localhost nova_compute[297021]: 2025-10-05 10:00:16.409 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:00:16 localhost nova_compute[297021]: 2025-10-05 10:00:16.410 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:00:18 localhost nova_compute[297021]: 2025-10-05 10:00:18.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:18 localhost nova_compute[297021]: 2025-10-05 10:00:18.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:18 localhost nova_compute[297021]: 2025-10-05 10:00:18.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:00:18 localhost nova_compute[297021]: 2025-10-05 10:00:18.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:18 localhost nova_compute[297021]: 2025-10-05 10:00:18.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:18 localhost nova_compute[297021]: 
2025-10-05 10:00:18.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:00:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3428501207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:00:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:00:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3428501207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:00:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 06:00:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 06:00:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 06:00:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 06:00:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 06:00:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 06:00:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:00:20 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:20.462 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:00:20 localhost 
ovn_metadata_agent[163429]: 2025-10-05 10:00:20.463 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:00:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:20.464 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:00:20 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:00:20 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:21 localhost podman[248506]: time="2025-10-05T10:00:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:00:21 localhost podman[248506]: @ - - [05/Oct/2025:10:00:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 143852 "" "Go-http-client/1.1" Oct 5 06:00:21 localhost podman[248506]: @ - - [05/Oct/2025:10:00:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18873 "" "Go-http-client/1.1" Oct 5 06:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:00:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:00:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:21 localhost podman[322373]: 2025-10-05 10:00:21.677954311 +0000 UTC m=+0.080236792 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 06:00:21 localhost podman[322373]: 2025-10-05 10:00:21.710847786 +0000 UTC m=+0.113130247 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Oct 5 06:00:21 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:00:22 localhost openstack_network_exporter[250601]: ERROR 10:00:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:00:22 localhost openstack_network_exporter[250601]: ERROR 10:00:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:00:22 localhost openstack_network_exporter[250601]: ERROR 10:00:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:00:22 localhost openstack_network_exporter[250601]: ERROR 10:00:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:00:22 localhost openstack_network_exporter[250601]: Oct 5 06:00:22 localhost openstack_network_exporter[250601]: ERROR 10:00:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:00:22 localhost openstack_network_exporter[250601]: Oct 5 06:00:22 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:00:23 localhost nova_compute[297021]: 2025-10-05 10:00:23.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:23 localhost nova_compute[297021]: 2025-10-05 10:00:23.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:23 localhost nova_compute[297021]: 2025-10-05 10:00:23.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe 
run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:00:23 localhost nova_compute[297021]: 2025-10-05 10:00:23.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:23 localhost nova_compute[297021]: 2025-10-05 10:00:23.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:23 localhost nova_compute[297021]: 2025-10-05 10:00:23.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:00:24 localhost systemd[1]: tmp-crun.MLw4mO.mount: Deactivated successfully. Oct 5 06:00:24 localhost podman[322391]: 2025-10-05 10:00:24.690551388 +0000 UTC m=+0.098211161 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:00:24 localhost podman[322391]: 2025-10-05 10:00:24.794167554 +0000 UTC m=+0.201827277 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Oct 5 06:00:24 localhost systemd[1]: 
1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:00:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:00:27 localhost podman[322416]: 2025-10-05 10:00:27.646418112 +0000 UTC m=+0.056729963 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm) Oct 5 06:00:27 localhost podman[322416]: 2025-10-05 10:00:27.68200636 +0000 UTC m=+0.092318201 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Oct 5 06:00:27 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:00:28 localhost nova_compute[297021]: 2025-10-05 10:00:28.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:00:29 localhost systemd[1]: tmp-crun.JHvLy2.mount: Deactivated successfully. Oct 5 06:00:29 localhost podman[322435]: 2025-10-05 10:00:29.693344411 +0000 UTC m=+0.097652505 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public) Oct 5 06:00:29 localhost podman[322435]: 2025-10-05 10:00:29.706450168 +0000 UTC m=+0.110758322 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, config_id=edpm, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 06:00:29 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:00:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:33 localhost nova_compute[297021]: 2025-10-05 10:00:33.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:33 localhost nova_compute[297021]: 2025-10-05 10:00:33.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:33 localhost nova_compute[297021]: 2025-10-05 10:00:33.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:00:33 localhost nova_compute[297021]: 2025-10-05 10:00:33.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:33 localhost nova_compute[297021]: 2025-10-05 10:00:33.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:33 localhost nova_compute[297021]: 2025-10-05 10:00:33.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:35.028 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': 
'24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:00:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:35.029 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:00:35 localhost nova_compute[297021]: 2025-10-05 10:00:35.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:00:35 localhost podman[322454]: 2025-10-05 10:00:35.676191224 +0000 UTC m=+0.088284862 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:00:35 localhost podman[322454]: 2025-10-05 10:00:35.713939669 +0000 UTC m=+0.126033297 container exec_died 
9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:00:35 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 06:00:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:37.585 272040 INFO oslo.privsep.daemon [None req-d31173cc-1b69-4a1f-b78d-4432c60f2227 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp6jujdv6s/privsep.sock']#033[00m Oct 5 06:00:38 localhost nova_compute[297021]: 2025-10-05 10:00:38.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:38.193 272040 INFO oslo.privsep.daemon [None req-d31173cc-1b69-4a1f-b78d-4432c60f2227 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 06:00:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:38.075 322481 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 06:00:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:38.081 322481 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 06:00:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:38.084 322481 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Oct 5 06:00:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:38.084 322481 INFO oslo.privsep.daemon [-] privsep daemon running as pid 322481#033[00m Oct 5 06:00:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:38.747 272040 INFO oslo.privsep.daemon [None 
req-d31173cc-1b69-4a1f-b78d-4432c60f2227 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp8o9651vm/privsep.sock']#033[00m Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.836 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.837 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.838 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.856 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '05d24514-816a-4a11-9717-a89e4ac87802', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:00:38.838231', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '253ee352-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.080157154, 'message_signature': '1c043e9ad033a453a2fb6e41ccf560bcb78ec4851e23f06f5b23bc473c43b4f8'}]}, 'timestamp': '2025-10-05 10:00:38.857107', '_unique_id': '69be4de8e80f4cacb6860eeef2c66942'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.858 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.860 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.863 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '03fd5205-48c5-422e-b6fe-b10a556fa52f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.860739', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '253ffd8c-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': 'ce30789a9a6d7cd84ad6b0ee71e022f96b80877a93be4b39b1580b4dd7d798d7'}]}, 'timestamp': '2025-10-05 10:00:38.864262', '_unique_id': '80ae12d07f5a428781dbdbfa24c824cc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:00:38.865 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.866 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.867 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '990c93d2-2734-4440-a956-48376791cedb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.867111', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '25408428-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': '9a8f5731d15454e03149baa47a4f94472b022c426964a2a8e6533c2c0aeedaad'}]}, 'timestamp': '2025-10-05 10:00:38.867704', '_unique_id': '3f6bc7284b534ecb871bc7b0a9f18891'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.868 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.870 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.870 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.870 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 15000000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd63645bc-8126-4a36-995a-7be8a5ca455a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15000000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:00:38.870749', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': 
None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '25411320-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.080157154, 'message_signature': 'd0d88ce9e34024aea83ad7ac3c12f473f2ea76ef0fad4bb4d5caa7266f083f84'}]}, 'timestamp': '2025-10-05 10:00:38.871348', '_unique_id': 'b6a71f4ef00746d68d42caa5268361cd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 
06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 
06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.872 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.873 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.874 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9a2377ed-6400-49c9-ad3f-4077119675e9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.874214', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '254199f8-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': 'b56f6c78703c8d603038ca478eb826631cc9fd13a9d5a2d86877902805dc9657'}]}, 'timestamp': '2025-10-05 10:00:38.874814', '_unique_id': 'e9e44c4ff1a34f33a106fbfa7c59b43c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.875 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.877 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.887 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.888 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bcabe3f5-4aa2-4a1c-8e14-855dd420f49a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:00:38.877551', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2543a18a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.101467294, 'message_signature': '2a5c0cb6ffcb16d8221c4143f6f564efb5cdaf516e38873342878a3208224d08'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:00:38.877551', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2543b620-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.101467294, 'message_signature': '56ac0656a7dc815a4705f64c685a413438cd493e0c121540c72c9e3fc5e415d2'}]}, 'timestamp': '2025-10-05 10:00:38.888648', '_unique_id': '3b6f7ba0beeb4d24a98faa4f683f8e3d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging Traceback (most recent 
call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.889 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.891 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.891 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '882e0791-d3eb-40a6-b810-27da0c525a3a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.891723', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '25444414-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': 'c39fb79b5a43db5d97ad6e8d561471c7123f91cb23c14b1a4320fd7e89061f86'}]}, 'timestamp': '2025-10-05 10:00:38.892275', '_unique_id': '76bc0b13774245a9be731a22b316d8b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.893 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.894 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.895 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5e882216-d3eb-4983-b9c7-2131f8dd8e66', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.895024', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '2544c4f2-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': '2847949385b4a8f58439cb134a58301e22f799a0483ff52caa9ea58902cec56a'}]}, 'timestamp': '2025-10-05 10:00:38.895607', '_unique_id': '98c356cba49842759dbfe6170aa462aa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.896 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.898 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.898 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.916 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.917 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '82c9e2b2-d2ad-479b-877a-8daef38c735c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:00:38.898828', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25481c74-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': '50da7a1f4781550ea6995927f4a635c53c86681f49475703975c113d194ef679'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:00:38.898828', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25482e1c-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': '3b9e7334ff7eeb9afe304098bf489c7f90d252f3634148cf1a31deef88f21f67'}]}, 'timestamp': '2025-10-05 10:00:38.917890', '_unique_id': '7f7ea8e31f3342aabc47a4ebd6390480'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.920 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.920 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '13224fa9-10b0-4786-ab20-2f7524c0a487', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.920145', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '25489744-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': 'be19fb05d7efcd62b2e970a8c1a267ebd24e011f154975e3296fccb2eaad890a'}]}, 'timestamp': '2025-10-05 10:00:38.921163', '_unique_id': '9bfb46dac89945859640cb60ea3999c3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.923 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7eba8298-4a06-4c8a-a971-7dccb9ab3d46', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.923907', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '25492d30-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': 'ccc5cf84a4992ba093bdf8ac9577e114409ef649b492548762f64e7f27fb58b8'}]}, 'timestamp': '2025-10-05 10:00:38.924491', '_unique_id': '533f15978a4e43b3b832ec30ef99d3e0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.925 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.927 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.927 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.927 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'bcac3b4b-d25f-480c-b7f5-896b457bf45f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:00:38.927215', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2549af94-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': '033de0ab8f2952e33268956c5df6f0079b06de2da16ba9b96935bcbd59a2315a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:00:38.927215', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2549c2f4-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': '6c070e774732529788c1ec0b3ef2121278a428c6ab3349fd7a676f269b637a90'}]}, 'timestamp': '2025-10-05 10:00:38.928259', '_unique_id': '35fa51a9f3bc4edbba0701f4b0479fcb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.930 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '8ced0f3f-620c-44f2-90dd-528287141fc9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:00:38.931017', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '254a4954-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.101467294, 'message_signature': 'a05f1c8d8149abd49f08b5aecccd48a2a7994309c0467043c7c51939ac4246bd'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:00:38.931017', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '254a5cb4-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.101467294, 'message_signature': 'f6c7595f0a5eaf52af0820cb4851d35e181a04a80709dc860f3e29dfe6069332'}]}, 'timestamp': '2025-10-05 10:00:38.932196', '_unique_id': 'c818c9713bb84e8f9db2dd538c4f2290'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:00:38.933 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 
ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.933 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.934 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '80d062ef-0883-424d-a830-bc1b62ed725a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:00:38.934953', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '254adc7a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': '6106a99d4ffbbf425eba2778c7bb7ac304eafcb5653382a031f9576e6b4c7c61'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:00:38.934953', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '254af110-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': 'e24eaf4143bfd43a6167bd285d421b888772d62b72287f06d0d1c6095a58ebb2'}]}, 'timestamp': '2025-10-05 10:00:38.935998', '_unique_id': '5737f5e25f0a4970b0ba897aa5d3e5d5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.938 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.939 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd980f2e3-ec08-4719-86fc-13765363d6d2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:00:38.938760', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '254b6e6a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': '428bb7ee2f4745346a8b54b30c079d06f41f1023a5be573c151e877f060b2eea'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:00:38.938760', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '254b7a40-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': 'c2a222b2af4740ca2b66387515172098df904f40fafdd6e01ba2f83dacfe9302'}]}, 'timestamp': '2025-10-05 10:00:38.939451', '_unique_id': 'aaf5508fd6204785b0502b42b36f792e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.940 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.941 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3531fb37-cc56-46b4-acb5-445aaae27ca4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.941197', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '254bcdf6-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': '114635e3d8e6195b6c93f76bed04dbee50b737ff6d242c08dfd4147c7809f198'}]}, 'timestamp': '2025-10-05 10:00:38.941594', '_unique_id': '8767f8e771774ed4aca6dafcefee9dac'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.943 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.943 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '20b51db1-b5f0-4672-a1f8-e681fa7f6b33', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.943309', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '254c2094-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': '80b743f5d01c1cd3178a743ebde3aa66b3f3be2860987bd1ec892840f9aef808'}]}, 'timestamp': '2025-10-05 10:00:38.943711', '_unique_id': '2d212a7c26b34957b45507a8826ac7c6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.945 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a1980b59-f603-479d-8367-ac4509b0838a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:00:38.945466', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '254c7314-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': '76d01406dd3f53a43d73146de7baa646c03aa61aa08128661d888b78c04bc689'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:00:38.945466', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '254c7fb2-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': '994eb7f0870a9b2fb58afc3f6a87e3d34dc23a8837629b09cfe767c55da03aad'}]}, 'timestamp': '2025-10-05 10:00:38.946128', '_unique_id': 'b5aa8292903041359216446640b1d183'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.947 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '437d6e0f-b563-469f-8c38-7cf3da177171', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:00:38.948057', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '254cd84a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': 'e48fc251f58ffaf960f0d694275afa9004fa881fc4bc30e9e3934ce880e65860'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:00:38.948057', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '254ce5ce-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.122750571, 'message_signature': 'aff2c99a5a807b9c1b2c4dfe678687ff575b69dc383e7bdaee0a9ecbabf0eb2f'}]}, 'timestamp': '2025-10-05 10:00:38.948746', '_unique_id': '2a7964c8fc3849a6950c8c407481e344'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 
06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.950 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.950 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '47409ef3-c059-4f74-9de6-8fc5ddb77abd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:00:38.950475', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '254d36c8-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.084655157, 'message_signature': '684a53adfcf6901d4b6b631729cbe8554f4b13d06ae3c518f8c2795fd86f43c1'}]}, 'timestamp': '2025-10-05 10:00:38.950824', '_unique_id': 'af9817985c9849f4a77ba71b8b5d4d29'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:00:38.952 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a63c1fd8-cd1f-41de-ab96-a5782fd14db2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:00:38.952491', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '254d8510-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.101467294, 'message_signature': '5b3a7827f9b42734f9b6902f2fd86169c5ad7a6fff50e49c4d4d2883ca7c3a2e'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:00:38.952491', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '254d90fa-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 11918.101467294, 'message_signature': '622125a2442052df7c3d583fe469476ad220f61fad76b773c040a2ad19e991bc'}]}, 'timestamp': '2025-10-05 10:00:38.953116', '_unique_id': '0319e27571924d538c97cbe2bf0945e5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:00:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:00:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:00:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 
10:00:39.362 272040 INFO oslo.privsep.daemon [None req-d31173cc-1b69-4a1f-b78d-4432c60f2227 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 06:00:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:39.256 322490 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 06:00:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:39.262 322490 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 06:00:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:39.266 322490 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Oct 5 06:00:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:39.266 322490 INFO oslo.privsep.daemon [-] privsep daemon running as pid 322490#033[00m Oct 5 06:00:40 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:40.031 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:00:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:40.281 272040 INFO oslo.privsep.daemon [None req-d31173cc-1b69-4a1f-b78d-4432c60f2227 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpx45pdt9n/privsep.sock']#033[00m Oct 5 06:00:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e87 do_prune osdmap full prune enabled Oct 5 06:00:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e88 e88: 6 total, 6 up, 6 in Oct 5 06:00:40 localhost ceph-mon[308154]: 
log_channel(cluster) log [DBG] : osdmap e88: 6 total, 6 up, 6 in Oct 5 06:00:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:40.906 272040 INFO oslo.privsep.daemon [None req-d31173cc-1b69-4a1f-b78d-4432c60f2227 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 06:00:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:40.788 322502 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 06:00:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:40.791 322502 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 06:00:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:40.793 322502 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Oct 5 06:00:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:40.794 322502 INFO oslo.privsep.daemon [-] privsep daemon running as pid 322502#033[00m Oct 5 06:00:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:00:41 localhost podman[322507]: 2025-10-05 10:00:41.709375654 +0000 UTC m=+0.103692680 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:00:41 localhost podman[322507]: 2025-10-05 10:00:41.718091922 +0000 UTC m=+0.112408998 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:00:41 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:00:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:42.295 272040 INFO neutron.agent.linux.ip_lib [None req-d31173cc-1b69-4a1f-b78d-4432c60f2227 - - - - - -] Device tap1b916c0a-fb cannot be used as it has no MAC address#033[00m Oct 5 06:00:42 localhost nova_compute[297021]: 2025-10-05 10:00:42.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:42 localhost kernel: device tap1b916c0a-fb entered promiscuous mode Oct 5 06:00:42 localhost ovn_controller[157794]: 2025-10-05T10:00:42Z|00069|binding|INFO|Claiming lport 1b916c0a-fb27-4c12-9a09-149188d6c993 for this chassis. 
Oct 5 06:00:42 localhost ovn_controller[157794]: 2025-10-05T10:00:42Z|00070|binding|INFO|1b916c0a-fb27-4c12-9a09-149188d6c993: Claiming unknown Oct 5 06:00:42 localhost NetworkManager[5981]: [1759658442.3884] manager: (tap1b916c0a-fb): new Generic device (/org/freedesktop/NetworkManager/Devices/18) Oct 5 06:00:42 localhost nova_compute[297021]: 2025-10-05 10:00:42.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:42 localhost systemd-udevd[322542]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:00:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:42.393 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.199.3/24', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-e10c03cc-e44d-4175-a3ad-388109591aab', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e10c03cc-e44d-4175-a3ad-388109591aab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a85c9145616b45688274273209b8d6b3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d32139-0141-46c0-8377-97cd9b413e25, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1b916c0a-fb27-4c12-9a09-149188d6c993) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:00:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:42.395 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 1b916c0a-fb27-4c12-9a09-149188d6c993 in datapath e10c03cc-e44d-4175-a3ad-388109591aab bound to our chassis#033[00m Oct 5 06:00:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:42.396 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0d5a1563-c2dd-434b-b62c-53956399454c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:00:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:42.396 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e10c03cc-e44d-4175-a3ad-388109591aab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:00:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:00:42.398 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[87836e3e-1c0c-49bc-bc64-18c41416f1cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:00:42 localhost journal[237931]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, ) Oct 5 06:00:42 localhost journal[237931]: hostname: np0005471150.localdomain Oct 5 06:00:42 localhost journal[237931]: ethtool ioctl error on tap1b916c0a-fb: No such device Oct 5 06:00:42 localhost journal[237931]: ethtool ioctl error on tap1b916c0a-fb: No such device Oct 5 06:00:42 localhost ovn_controller[157794]: 2025-10-05T10:00:42Z|00071|binding|INFO|Setting lport 1b916c0a-fb27-4c12-9a09-149188d6c993 ovn-installed in OVS Oct 5 06:00:42 localhost ovn_controller[157794]: 2025-10-05T10:00:42Z|00072|binding|INFO|Setting lport 1b916c0a-fb27-4c12-9a09-149188d6c993 up in Southbound Oct 5 06:00:42 
localhost nova_compute[297021]: 2025-10-05 10:00:42.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:42 localhost journal[237931]: ethtool ioctl error on tap1b916c0a-fb: No such device Oct 5 06:00:42 localhost journal[237931]: ethtool ioctl error on tap1b916c0a-fb: No such device Oct 5 06:00:42 localhost journal[237931]: ethtool ioctl error on tap1b916c0a-fb: No such device Oct 5 06:00:42 localhost journal[237931]: ethtool ioctl error on tap1b916c0a-fb: No such device Oct 5 06:00:42 localhost journal[237931]: ethtool ioctl error on tap1b916c0a-fb: No such device Oct 5 06:00:42 localhost journal[237931]: ethtool ioctl error on tap1b916c0a-fb: No such device Oct 5 06:00:42 localhost nova_compute[297021]: 2025-10-05 10:00:42.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:42 localhost nova_compute[297021]: 2025-10-05 10:00:42.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e88 do_prune osdmap full prune enabled Oct 5 06:00:42 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e42: np0005471152.kbhlus(active, since 91s), standbys: np0005471150.zwqxye, np0005471151.jecxod Oct 5 06:00:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 e89: 6 total, 6 up, 6 in Oct 5 06:00:42 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in Oct 5 06:00:43 localhost nova_compute[297021]: 2025-10-05 10:00:43.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:43 localhost podman[322614]: Oct 5 06:00:43 localhost podman[322614]: 2025-10-05 10:00:43.419525279 
+0000 UTC m=+0.095471016 container create b6b5871259784d704d72601019b439da0a71293744a73e1c9a79d1788fec81f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e10c03cc-e44d-4175-a3ad-388109591aab, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:00:43 localhost podman[322614]: 2025-10-05 10:00:43.37249504 +0000 UTC m=+0.048440827 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:00:43 localhost systemd[1]: Started libpod-conmon-b6b5871259784d704d72601019b439da0a71293744a73e1c9a79d1788fec81f0.scope. Oct 5 06:00:43 localhost systemd[1]: tmp-crun.Ro2Qxk.mount: Deactivated successfully. Oct 5 06:00:43 localhost systemd[1]: Started libcrun container. 
Oct 5 06:00:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810a46f2a8f22a4246c235bda13b124f1c3cf547b1a8525584b6ae4daea4ff32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:00:43 localhost podman[322614]: 2025-10-05 10:00:43.50638514 +0000 UTC m=+0.182330877 container init b6b5871259784d704d72601019b439da0a71293744a73e1c9a79d1788fec81f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e10c03cc-e44d-4175-a3ad-388109591aab, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:00:43 localhost podman[322614]: 2025-10-05 10:00:43.515684832 +0000 UTC m=+0.191630569 container start b6b5871259784d704d72601019b439da0a71293744a73e1c9a79d1788fec81f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e10c03cc-e44d-4175-a3ad-388109591aab, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:00:43 localhost dnsmasq[322632]: started, version 2.85 cachesize 150 Oct 5 06:00:43 localhost dnsmasq[322632]: DNS service limited to local subnets Oct 5 06:00:43 localhost dnsmasq[322632]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:00:43 localhost dnsmasq[322632]: warning: no upstream servers configured Oct 
5 06:00:43 localhost dnsmasq-dhcp[322632]: DHCP, static leases only on 192.168.199.0, lease time 1d Oct 5 06:00:43 localhost dnsmasq[322632]: read /var/lib/neutron/dhcp/e10c03cc-e44d-4175-a3ad-388109591aab/addn_hosts - 0 addresses Oct 5 06:00:43 localhost dnsmasq-dhcp[322632]: read /var/lib/neutron/dhcp/e10c03cc-e44d-4175-a3ad-388109591aab/host Oct 5 06:00:43 localhost dnsmasq-dhcp[322632]: read /var/lib/neutron/dhcp/e10c03cc-e44d-4175-a3ad-388109591aab/opts Oct 5 06:00:44 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:00:44.096 272040 INFO neutron.agent.dhcp.agent [None req-b81e15f0-b3a2-4877-a901-0e96e6cefe74 - - - - - -] DHCP configuration for ports {'a3b3dbd6-4bf4-4d39-a89b-64b66c6c627a'} is completed#033[00m Oct 5 06:00:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:00:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:00:45 localhost systemd[1]: tmp-crun.xc4AtM.mount: Deactivated successfully. 
Oct 5 06:00:45 localhost podman[322634]: 2025-10-05 10:00:45.697736914 +0000 UTC m=+0.098834268 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:00:45 localhost systemd[1]: tmp-crun.gV3q1B.mount: Deactivated successfully. 
Oct 5 06:00:45 localhost podman[322633]: 2025-10-05 10:00:45.737454543 +0000 UTC m=+0.138993338 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 06:00:45 localhost podman[322633]: 2025-10-05 10:00:45.77483099 +0000 UTC m=+0.176369785 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 06:00:45 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:00:45 localhost podman[322634]: 2025-10-05 10:00:45.790238418 +0000 UTC m=+0.191335712 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 06:00:45 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:00:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:48 localhost nova_compute[297021]: 2025-10-05 10:00:48.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:48 localhost nova_compute[297021]: 2025-10-05 10:00:48.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:48 localhost nova_compute[297021]: 2025-10-05 10:00:48.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:00:48 localhost nova_compute[297021]: 2025-10-05 10:00:48.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:48 localhost nova_compute[297021]: 2025-10-05 10:00:48.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:48 localhost nova_compute[297021]: 2025-10-05 10:00:48.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:51 localhost podman[248506]: time="2025-10-05T10:00:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:00:51 localhost podman[248506]: @ - - [05/Oct/2025:10:00:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false 
HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:00:51 localhost podman[248506]: @ - - [05/Oct/2025:10:00:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19352 "" "Go-http-client/1.1" Oct 5 06:00:52 localhost openstack_network_exporter[250601]: ERROR 10:00:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:00:52 localhost openstack_network_exporter[250601]: ERROR 10:00:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:00:52 localhost openstack_network_exporter[250601]: ERROR 10:00:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:00:52 localhost openstack_network_exporter[250601]: ERROR 10:00:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:00:52 localhost openstack_network_exporter[250601]: Oct 5 06:00:52 localhost openstack_network_exporter[250601]: ERROR 10:00:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:00:52 localhost openstack_network_exporter[250601]: Oct 5 06:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:00:52 localhost podman[322674]: 2025-10-05 10:00:52.677224897 +0000 UTC m=+0.085836214 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent) Oct 5 06:00:52 localhost podman[322674]: 2025-10-05 10:00:52.712166587 +0000 UTC 
m=+0.120777864 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:00:52 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:00:53 localhost nova_compute[297021]: 2025-10-05 10:00:53.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:53 localhost nova_compute[297021]: 2025-10-05 10:00:53.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:00:53 localhost nova_compute[297021]: 2025-10-05 10:00:53.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:00:53 localhost nova_compute[297021]: 2025-10-05 10:00:53.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:53 localhost nova_compute[297021]: 2025-10-05 10:00:53.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:53 localhost nova_compute[297021]: 2025-10-05 10:00:53.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:00:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:00:55 localhost podman[322693]: 2025-10-05 10:00:55.675741001 +0000 UTC m=+0.088656791 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:00:55 localhost podman[322693]: 2025-10-05 10:00:55.794061607 +0000 UTC m=+0.206977427 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 5 06:00:55 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:00:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:58 localhost nova_compute[297021]: 2025-10-05 10:00:58.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:58 localhost nova_compute[297021]: 2025-10-05 10:00:58.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:00:58 localhost podman[322718]: 2025-10-05 10:00:58.673831303 +0000 UTC m=+0.083053757 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:00:58 localhost podman[322718]: 2025-10-05 10:00:58.684261297 +0000 UTC m=+0.093483781 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2) Oct 5 06:00:58 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:01:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:01:00 localhost systemd[1]: tmp-crun.r4fYHT.mount: Deactivated successfully. Oct 5 06:01:00 localhost podman[322737]: 2025-10-05 10:01:00.6838694 +0000 UTC m=+0.094807888 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7) Oct 5 06:01:00 localhost podman[322737]: 2025-10-05 10:01:00.699862664 +0000 UTC m=+0.110801142 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, config_id=edpm, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.) Oct 5 06:01:00 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:01:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:03 localhost nova_compute[297021]: 2025-10-05 10:01:03.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:03 localhost nova_compute[297021]: 2025-10-05 10:01:03.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:03 localhost nova_compute[297021]: 2025-10-05 10:01:03.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:01:03 localhost nova_compute[297021]: 2025-10-05 10:01:03.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:03 localhost nova_compute[297021]: 2025-10-05 10:01:03.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:03 localhost nova_compute[297021]: 2025-10-05 10:01:03.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:05 localhost nova_compute[297021]: 2025-10-05 10:01:05.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m 
Oct 5 06:01:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:01:06 localhost systemd[1]: tmp-crun.lB017t.mount: Deactivated successfully. Oct 5 06:01:06 localhost podman[322768]: 2025-10-05 10:01:06.684599129 +0000 UTC m=+0.085498575 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:01:06 localhost podman[322768]: 2025-10-05 10:01:06.69789037 +0000 UTC m=+0.098789836 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:01:06 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:01:08 localhost nova_compute[297021]: 2025-10-05 10:01:08.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:08 localhost nova_compute[297021]: 2025-10-05 10:01:08.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:08 localhost nova_compute[297021]: 2025-10-05 10:01:08.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:01:08 localhost nova_compute[297021]: 2025-10-05 10:01:08.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:08 localhost nova_compute[297021]: 2025-10-05 10:01:08.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:08 localhost nova_compute[297021]: 2025-10-05 10:01:08.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:09 localhost nova_compute[297021]: 2025-10-05 10:01:09.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:09 localhost nova_compute[297021]: 2025-10-05 10:01:09.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:09 localhost nova_compute[297021]: 2025-10-05 10:01:09.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:09 localhost nova_compute[297021]: 2025-10-05 10:01:09.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:09 localhost nova_compute[297021]: 2025-10-05 10:01:09.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:01:10 localhost nova_compute[297021]: 2025-10-05 10:01:10.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:10 localhost nova_compute[297021]: 2025-10-05 10:01:10.444 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:01:10 localhost nova_compute[297021]: 2025-10-05 10:01:10.444 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:01:10 localhost nova_compute[297021]: 2025-10-05 10:01:10.444 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:01:10 localhost nova_compute[297021]: 2025-10-05 10:01:10.445 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:01:10 localhost nova_compute[297021]: 2025-10-05 10:01:10.445 2 DEBUG 
oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:01:10 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:01:10 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3560857866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:01:10 localhost nova_compute[297021]: 2025-10-05 10:01:10.907 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:01:10 localhost nova_compute[297021]: 2025-10-05 10:01:10.992 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:01:10 localhost nova_compute[297021]: 2025-10-05 10:01:10.993 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.195 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.199 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11395MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.199 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.199 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.282 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.282 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.283 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:01:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.354 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:01:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:01:11 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/687538955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.801 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.809 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.826 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.829 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:01:11 localhost nova_compute[297021]: 2025-10-05 10:01:11.829 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:01:12 localhost ovn_controller[157794]: 2025-10-05T10:01:12Z|00073|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Oct 5 06:01:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:01:12 localhost podman[322835]: 2025-10-05 10:01:12.670852614 +0000 UTC m=+0.079817541 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:01:12 localhost podman[322835]: 2025-10-05 10:01:12.684914226 +0000 UTC m=+0.093879143 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:01:12 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:01:13 localhost nova_compute[297021]: 2025-10-05 10:01:13.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:13 localhost nova_compute[297021]: 2025-10-05 10:01:13.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:13 localhost nova_compute[297021]: 2025-10-05 10:01:13.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:01:13 localhost nova_compute[297021]: 2025-10-05 10:01:13.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:13 localhost nova_compute[297021]: 2025-10-05 10:01:13.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:13 localhost nova_compute[297021]: 2025-10-05 10:01:13.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:13 localhost nova_compute[297021]: 2025-10-05 10:01:13.830 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:13 localhost nova_compute[297021]: 2025-10-05 10:01:13.861 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:14 localhost nova_compute[297021]: 2025-10-05 
10:01:14.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:16 localhost nova_compute[297021]: 2025-10-05 10:01:16.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:01:16 localhost podman[322859]: 2025-10-05 10:01:16.690509695 +0000 UTC m=+0.088786585 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:01:16 localhost podman[322859]: 2025-10-05 10:01:16.730841641 +0000 UTC m=+0.129118541 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3) Oct 5 06:01:16 localhost podman[322858]: 2025-10-05 10:01:16.743155155 +0000 UTC m=+0.144606901 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:01:16 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 06:01:16 localhost podman[322858]: 2025-10-05 10:01:16.75581805 +0000 UTC m=+0.157269796 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible) Oct 5 06:01:16 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:01:17 localhost nova_compute[297021]: 2025-10-05 10:01:17.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:17 localhost nova_compute[297021]: 2025-10-05 10:01:17.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:01:17 localhost nova_compute[297021]: 2025-10-05 10:01:17.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:01:17 localhost nova_compute[297021]: 2025-10-05 10:01:17.566 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:01:17 localhost nova_compute[297021]: 2025-10-05 10:01:17.566 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:01:17 localhost 
nova_compute[297021]: 2025-10-05 10:01:17.567 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:01:17 localhost nova_compute[297021]: 2025-10-05 10:01:17.567 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:01:18 localhost nova_compute[297021]: 2025-10-05 10:01:18.314 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:01:18 localhost nova_compute[297021]: 2025-10-05 10:01:18.332 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:01:18 localhost nova_compute[297021]: 2025-10-05 10:01:18.332 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:01:18 localhost nova_compute[297021]: 2025-10-05 10:01:18.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:01:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1146005803' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:01:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:01:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1146005803' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:01:20 localhost nova_compute[297021]: 2025-10-05 10:01:20.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:20 localhost nova_compute[297021]: 2025-10-05 10:01:20.320 2 DEBUG oslo_concurrency.processutils [None req-3865f1ec-968a-440d-a01e-4904061c0ed7 4394e67b7ca5490b8879efa727fa0c6e 502bcc621c81441bb085df9f2c089996 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:01:20 localhost nova_compute[297021]: 2025-10-05 10:01:20.340 2 DEBUG oslo_concurrency.processutils [None req-3865f1ec-968a-440d-a01e-4904061c0ed7 4394e67b7ca5490b8879efa727fa0c6e 502bcc621c81441bb085df9f2c089996 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:01:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:20.463 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:01:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:20.464 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:01:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:20.464 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 
0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:01:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:01:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:01:21 localhost podman[248506]: time="2025-10-05T10:01:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:01:21 localhost podman[248506]: @ - - [05/Oct/2025:10:01:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:01:21 localhost podman[248506]: @ - - [05/Oct/2025:10:01:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19357 "" "Go-http-client/1.1" Oct 5 06:01:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:01:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:01:21 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:01:21 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:01:21 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:01:22 localhost openstack_network_exporter[250601]: ERROR 10:01:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:01:22 localhost 
openstack_network_exporter[250601]: ERROR 10:01:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:01:22 localhost openstack_network_exporter[250601]: ERROR 10:01:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:01:22 localhost openstack_network_exporter[250601]: ERROR 10:01:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:01:22 localhost openstack_network_exporter[250601]: Oct 5 06:01:22 localhost openstack_network_exporter[250601]: ERROR 10:01:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:01:22 localhost openstack_network_exporter[250601]: Oct 5 06:01:22 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:22.524 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:01:22 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:22.526 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:01:22 localhost nova_compute[297021]: 2025-10-05 10:01:22.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:23 localhost nova_compute[297021]: 2025-10-05 10:01:23.370 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:01:23 localhost podman[322984]: 2025-10-05 10:01:23.683276338 +0000 UTC m=+0.088353863 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:01:23 localhost podman[322984]: 2025-10-05 10:01:23.720833129 +0000 UTC m=+0.125910704 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:01:23 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:01:24 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:24.528 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:01:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:26 localhost nova_compute[297021]: 2025-10-05 10:01:26.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:01:26 localhost podman[323001]: 2025-10-05 10:01:26.671658636 +0000 UTC m=+0.078954177 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 06:01:26 localhost podman[323001]: 2025-10-05 10:01:26.736795168 +0000 UTC m=+0.144090699 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:01:26 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:01:28 localhost nova_compute[297021]: 2025-10-05 10:01:28.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:01:29 localhost nova_compute[297021]: 2025-10-05 10:01:29.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:29 localhost systemd[1]: tmp-crun.WnWRV4.mount: Deactivated successfully. 
Oct 5 06:01:29 localhost podman[323026]: 2025-10-05 10:01:29.720799577 +0000 UTC m=+0.126658973 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 06:01:29 localhost podman[323026]: 2025-10-05 10:01:29.735898987 +0000 UTC m=+0.141758383 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 06:01:29 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. 
Immutable memtables: 0. Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:29.872991) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658489873124, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1767, "num_deletes": 251, "total_data_size": 2307956, "memory_usage": 2341040, "flush_reason": "Manual Compaction"} Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658489889618, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 2223010, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23352, "largest_seqno": 25118, "table_properties": {"data_size": 2215623, "index_size": 4340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16584, "raw_average_key_size": 21, "raw_value_size": 2200280, "raw_average_value_size": 2795, "num_data_blocks": 188, "num_entries": 787, "num_filter_entries": 787, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658356, "oldest_key_time": 1759658356, "file_creation_time": 1759658489, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 16659 microseconds, and 7929 cpu microseconds. Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:29.889673) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 2223010 bytes OK Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:29.889700) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:29.892553) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:29.892572) EVENT_LOG_v1 {"time_micros": 1759658489892566, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:29.892596) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2300340, prev total WAL 
file size 2300340, number of live WAL files 2. Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:29.893795) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end) Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(2170KB)], [42(16MB)] Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658489893848, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 19183898, "oldest_snapshot_seqno": -1} Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12195 keys, 16719167 bytes, temperature: kUnknown Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658489992639, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 16719167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16650887, "index_size": 36691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30533, "raw_key_size": 329079, "raw_average_key_size": 26, "raw_value_size": 
16444415, "raw_average_value_size": 1348, "num_data_blocks": 1375, "num_entries": 12195, "num_filter_entries": 12195, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658489, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Oct 5 06:01:29 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:29.992988) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 16719167 bytes Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:30.001352) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.0 rd, 169.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 16.2 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(16.2) write-amplify(7.5) OK, records in: 12727, records dropped: 532 output_compression: NoCompression Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:30.001430) EVENT_LOG_v1 {"time_micros": 1759658490001368, "job": 24, "event": "compaction_finished", "compaction_time_micros": 98907, "compaction_time_cpu_micros": 47626, "output_level": 6, "num_output_files": 1, "total_output_size": 16719167, "num_input_records": 12727, "num_output_records": 12195, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658490001891, "job": 24, "event": "table_file_deletion", "file_number": 44} Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658490004431, "job": 24, 
"event": "table_file_deletion", "file_number": 42} Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:29.893696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:30.004493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:30.004498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:30.004501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:30.004503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:01:30.004505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:01:31 localhost podman[323044]: 2025-10-05 10:01:31.678375037 +0000 UTC m=+0.083643324 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git) Oct 5 06:01:31 localhost podman[323044]: 2025-10-05 10:01:31.693926679 +0000 UTC m=+0.099195006 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 06:01:31 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:01:32 localhost neutron_sriov_agent[264984]: 2025-10-05 10:01:32.523 2 INFO neutron.agent.securitygroups_rpc [None req-369a5acc-81db-4453-b416-f03a8cf26524 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Security group member updated ['c0bd513c-388e-4362-8f22-2404d7744c8b']#033[00m Oct 5 06:01:33 localhost nova_compute[297021]: 2025-10-05 10:01:33.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:35 localhost neutron_sriov_agent[264984]: 2025-10-05 10:01:35.255 2 INFO neutron.agent.securitygroups_rpc [None req-20934a8c-c6ee-421b-bc8e-071eddd99f39 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Security group member updated ['c0bd513c-388e-4362-8f22-2404d7744c8b']#033[00m Oct 5 06:01:35 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:35.557 272040 INFO neutron.agent.linux.ip_lib [None req-b0ac6128-70b2-47c9-b80e-8cb5aee5f456 - - - - - -] Device tape627cbb3-47 cannot be used as it has no MAC address#033[00m Oct 5 06:01:35 localhost nova_compute[297021]: 2025-10-05 10:01:35.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:35 localhost kernel: device tape627cbb3-47 entered promiscuous mode Oct 5 06:01:35 localhost NetworkManager[5981]: [1759658495.6321] manager: (tape627cbb3-47): new Generic device (/org/freedesktop/NetworkManager/Devices/19) Oct 5 06:01:35 localhost ovn_controller[157794]: 2025-10-05T10:01:35Z|00074|binding|INFO|Claiming lport e627cbb3-4742-4b6f-9bf0-18e2b7cb4597 for this chassis. Oct 5 06:01:35 localhost ovn_controller[157794]: 2025-10-05T10:01:35Z|00075|binding|INFO|e627cbb3-4742-4b6f-9bf0-18e2b7cb4597: Claiming unknown Oct 5 06:01:35 localhost systemd-udevd[323075]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:01:35 localhost nova_compute[297021]: 2025-10-05 10:01:35.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:35.654 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.3/24', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-3b3799eb-b69f-487f-9d2e-8e9111478409', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b3799eb-b69f-487f-9d2e-8e9111478409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca79c6dd41f44883b5382141d131a288', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd2bc314-ac1e-47fb-a371-837692084a56, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e627cbb3-4742-4b6f-9bf0-18e2b7cb4597) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:01:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:35.657 163434 INFO neutron.agent.ovn.metadata.agent [-] Port e627cbb3-4742-4b6f-9bf0-18e2b7cb4597 in datapath 3b3799eb-b69f-487f-9d2e-8e9111478409 bound to our chassis#033[00m Oct 5 06:01:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:35.660 163434 DEBUG 
neutron.agent.ovn.metadata.agent [-] Port 81af6cfe-92b4-40c2-b804-bc10fcb3b505 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:01:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:35.660 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b3799eb-b69f-487f-9d2e-8e9111478409, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:01:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:35.661 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c4805b-e9a8-4e5d-a3ea-6ec06a24aef5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:01:35 localhost journal[237931]: ethtool ioctl error on tape627cbb3-47: No such device Oct 5 06:01:35 localhost journal[237931]: ethtool ioctl error on tape627cbb3-47: No such device Oct 5 06:01:35 localhost nova_compute[297021]: 2025-10-05 10:01:35.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:35 localhost ovn_controller[157794]: 2025-10-05T10:01:35Z|00076|binding|INFO|Setting lport e627cbb3-4742-4b6f-9bf0-18e2b7cb4597 ovn-installed in OVS Oct 5 06:01:35 localhost ovn_controller[157794]: 2025-10-05T10:01:35Z|00077|binding|INFO|Setting lport e627cbb3-4742-4b6f-9bf0-18e2b7cb4597 up in Southbound Oct 5 06:01:35 localhost nova_compute[297021]: 2025-10-05 10:01:35.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:35 localhost journal[237931]: ethtool ioctl error on tape627cbb3-47: No such device Oct 5 06:01:35 localhost journal[237931]: ethtool ioctl error on tape627cbb3-47: No such device Oct 5 06:01:35 localhost 
journal[237931]: ethtool ioctl error on tape627cbb3-47: No such device Oct 5 06:01:35 localhost journal[237931]: ethtool ioctl error on tape627cbb3-47: No such device Oct 5 06:01:35 localhost journal[237931]: ethtool ioctl error on tape627cbb3-47: No such device Oct 5 06:01:35 localhost journal[237931]: ethtool ioctl error on tape627cbb3-47: No such device Oct 5 06:01:35 localhost nova_compute[297021]: 2025-10-05 10:01:35.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:35 localhost nova_compute[297021]: 2025-10-05 10:01:35.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:36 localhost podman[323146]: Oct 5 06:01:36 localhost podman[323146]: 2025-10-05 10:01:36.568456717 +0000 UTC m=+0.094647733 container create 4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 06:01:36 localhost systemd[1]: Started libpod-conmon-4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648.scope. 
Oct 5 06:01:36 localhost podman[323146]: 2025-10-05 10:01:36.522504708 +0000 UTC m=+0.048695784 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:01:36 localhost systemd[1]: Started libcrun container. Oct 5 06:01:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51a5b07ff643e8a7fbfe1dd21a1aa57f0e7c1b521c71aad4963b3b96f2c8fbd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:01:36 localhost podman[323146]: 2025-10-05 10:01:36.642687545 +0000 UTC m=+0.168878551 container init 4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001) Oct 5 06:01:36 localhost podman[323146]: 2025-10-05 10:01:36.652144882 +0000 UTC m=+0.178335888 container start 4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b3799eb-b69f-487f-9d2e-8e9111478409, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 06:01:36 localhost dnsmasq[323164]: started, version 2.85 cachesize 150 Oct 5 06:01:36 localhost dnsmasq[323164]: DNS service limited to local subnets Oct 5 06:01:36 localhost 
dnsmasq[323164]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:01:36 localhost dnsmasq[323164]: warning: no upstream servers configured Oct 5 06:01:36 localhost dnsmasq-dhcp[323164]: DHCP, static leases only on 19.80.0.0, lease time 1d Oct 5 06:01:36 localhost dnsmasq[323164]: read /var/lib/neutron/dhcp/3b3799eb-b69f-487f-9d2e-8e9111478409/addn_hosts - 0 addresses Oct 5 06:01:36 localhost dnsmasq-dhcp[323164]: read /var/lib/neutron/dhcp/3b3799eb-b69f-487f-9d2e-8e9111478409/host Oct 5 06:01:36 localhost dnsmasq-dhcp[323164]: read /var/lib/neutron/dhcp/3b3799eb-b69f-487f-9d2e-8e9111478409/opts Oct 5 06:01:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:36.714 272040 INFO neutron.agent.dhcp.agent [None req-cb7334c4-561c-4b1b-8680-b66f4d7f98a5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:01:35Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b5fb5f36-c849-4f81-ab48-bd2c70c82f8f, ip_allocation=immediate, mac_address=fa:16:3e:2f:77:a7, name=tempest-subport-237616748, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:01:32Z, description=, dns_domain=, id=3b3799eb-b69f-487f-9d2e-8e9111478409, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-750005168, port_security_enabled=True, project_id=ca79c6dd41f44883b5382141d131a288, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18167, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=202, status=ACTIVE, subnets=['b4f2aff6-173c-4d66-a13a-58dfa119b5b2'], 
tags=[], tenant_id=ca79c6dd41f44883b5382141d131a288, updated_at=2025-10-05T10:01:34Z, vlan_transparent=None, network_id=3b3799eb-b69f-487f-9d2e-8e9111478409, port_security_enabled=True, project_id=ca79c6dd41f44883b5382141d131a288, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['c0bd513c-388e-4362-8f22-2404d7744c8b'], standard_attr_id=233, status=DOWN, tags=[], tenant_id=ca79c6dd41f44883b5382141d131a288, updated_at=2025-10-05T10:01:35Z on network 3b3799eb-b69f-487f-9d2e-8e9111478409#033[00m Oct 5 06:01:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:36.832 272040 INFO neutron.agent.dhcp.agent [None req-8d1f9fa6-7d07-4e9e-a3ec-a4fd6eaa5b99 - - - - - -] DHCP configuration for ports {'14736566-7986-4664-b838-97ef75bc59a4'} is completed#033[00m Oct 5 06:01:37 localhost dnsmasq[323164]: read /var/lib/neutron/dhcp/3b3799eb-b69f-487f-9d2e-8e9111478409/addn_hosts - 1 addresses Oct 5 06:01:37 localhost podman[323183]: 2025-10-05 10:01:37.006284698 +0000 UTC m=+0.058260045 container kill 4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 06:01:37 localhost dnsmasq-dhcp[323164]: read /var/lib/neutron/dhcp/3b3799eb-b69f-487f-9d2e-8e9111478409/host Oct 5 06:01:37 localhost dnsmasq-dhcp[323164]: read /var/lib/neutron/dhcp/3b3799eb-b69f-487f-9d2e-8e9111478409/opts Oct 5 06:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:01:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:37.674 272040 INFO neutron.agent.dhcp.agent [None req-a41b0e06-cb9a-413c-b8f4-c4fff920aba9 - - - - - -] DHCP configuration for ports {'b5fb5f36-c849-4f81-ab48-bd2c70c82f8f'} is completed#033[00m Oct 5 06:01:37 localhost podman[323203]: 2025-10-05 10:01:37.678630254 +0000 UTC m=+0.084065886 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:01:37 localhost podman[323203]: 2025-10-05 10:01:37.689734035 +0000 UTC m=+0.095169607 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:01:37 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:01:38 localhost nova_compute[297021]: 2025-10-05 10:01:38.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:43 localhost nova_compute[297021]: 2025-10-05 10:01:43.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:43 localhost nova_compute[297021]: 2025-10-05 10:01:43.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:43 localhost nova_compute[297021]: 2025-10-05 10:01:43.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:01:43 localhost nova_compute[297021]: 2025-10-05 10:01:43.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:43 localhost nova_compute[297021]: 2025-10-05 10:01:43.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Oct 5 06:01:43 localhost nova_compute[297021]: 2025-10-05 10:01:43.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:01:43 localhost podman[323226]: 2025-10-05 10:01:43.670437029 +0000 UTC m=+0.079523012 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:01:43 localhost podman[323226]: 2025-10-05 10:01:43.683900475 +0000 UTC m=+0.092986458 
container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:01:43 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:01:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 06:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:01:47 localhost podman[323250]: 2025-10-05 10:01:47.01096661 +0000 UTC m=+0.098157809 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Oct 5 06:01:47 localhost podman[323250]: 2025-10-05 10:01:47.048967662 +0000 UTC m=+0.136158851 container 
exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:01:47 localhost systemd[1]: tmp-crun.4p6xNv.mount: Deactivated successfully. Oct 5 06:01:47 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:01:47 localhost podman[323251]: 2025-10-05 10:01:47.066819938 +0000 UTC m=+0.149669010 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:01:47 localhost podman[323251]: 2025-10-05 10:01:47.07646321 +0000 UTC m=+0.159312332 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:01:47 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:01:48 localhost nova_compute[297021]: 2025-10-05 10:01:48.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:48 localhost nova_compute[297021]: 2025-10-05 10:01:48.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:48 localhost nova_compute[297021]: 2025-10-05 10:01:48.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:01:48 localhost nova_compute[297021]: 2025-10-05 10:01:48.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:48 localhost nova_compute[297021]: 2025-10-05 10:01:48.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:01:48 localhost nova_compute[297021]: 2025-10-05 10:01:48.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:01:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:51.215 272040 INFO neutron.agent.linux.ip_lib [None req-be22655e-5c07-44a4-b0d3-583d9b27d96d - - - - - -] Device tapcc68d2d0-cd cannot be used as it has no MAC address#033[00m Oct 5 06:01:51 localhost nova_compute[297021]: 2025-10-05 10:01:51.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:51 localhost kernel: device tapcc68d2d0-cd entered promiscuous mode Oct 5 06:01:51 localhost NetworkManager[5981]: [1759658511.2431] manager: (tapcc68d2d0-cd): new Generic device (/org/freedesktop/NetworkManager/Devices/20) Oct 5 06:01:51 
localhost nova_compute[297021]: 2025-10-05 10:01:51.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:51 localhost ovn_controller[157794]: 2025-10-05T10:01:51Z|00078|binding|INFO|Claiming lport cc68d2d0-cdaa-4495-848c-84e3ef78e69c for this chassis. Oct 5 06:01:51 localhost ovn_controller[157794]: 2025-10-05T10:01:51Z|00079|binding|INFO|cc68d2d0-cdaa-4495-848c-84e3ef78e69c: Claiming unknown Oct 5 06:01:51 localhost systemd-udevd[323300]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:01:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:51.266 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-9493e121-6caf-4009-9106-31c87685c480', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9493e121-6caf-4009-9106-31c87685c480', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0269f0ba-15e7-46b3-9fe6-9a4bc91e9d33, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=cc68d2d0-cdaa-4495-848c-84e3ef78e69c) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:01:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:51.269 163434 INFO neutron.agent.ovn.metadata.agent [-] Port cc68d2d0-cdaa-4495-848c-84e3ef78e69c in datapath 9493e121-6caf-4009-9106-31c87685c480 bound to our chassis#033[00m Oct 5 06:01:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:51.271 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6d1e4624-6fb5-4702-a61e-2573f14d74f8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:01:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:51.272 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9493e121-6caf-4009-9106-31c87685c480, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:01:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:01:51.273 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[a05155f6-fb66-48f7-bd64-709d3a08fabb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:01:51 localhost journal[237931]: ethtool ioctl error on tapcc68d2d0-cd: No such device Oct 5 06:01:51 localhost ovn_controller[157794]: 2025-10-05T10:01:51Z|00080|binding|INFO|Setting lport cc68d2d0-cdaa-4495-848c-84e3ef78e69c ovn-installed in OVS Oct 5 06:01:51 localhost ovn_controller[157794]: 2025-10-05T10:01:51Z|00081|binding|INFO|Setting lport cc68d2d0-cdaa-4495-848c-84e3ef78e69c up in Southbound Oct 5 06:01:51 localhost nova_compute[297021]: 2025-10-05 10:01:51.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:51 localhost journal[237931]: ethtool ioctl error on tapcc68d2d0-cd: No such device Oct 5 06:01:51 
localhost journal[237931]: ethtool ioctl error on tapcc68d2d0-cd: No such device Oct 5 06:01:51 localhost journal[237931]: ethtool ioctl error on tapcc68d2d0-cd: No such device Oct 5 06:01:51 localhost journal[237931]: ethtool ioctl error on tapcc68d2d0-cd: No such device Oct 5 06:01:51 localhost journal[237931]: ethtool ioctl error on tapcc68d2d0-cd: No such device Oct 5 06:01:51 localhost journal[237931]: ethtool ioctl error on tapcc68d2d0-cd: No such device Oct 5 06:01:51 localhost journal[237931]: ethtool ioctl error on tapcc68d2d0-cd: No such device Oct 5 06:01:51 localhost nova_compute[297021]: 2025-10-05 10:01:51.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:51 localhost nova_compute[297021]: 2025-10-05 10:01:51.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:51 localhost podman[248506]: time="2025-10-05T10:01:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:01:51 localhost podman[248506]: @ - - [05/Oct/2025:10:01:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147500 "" "Go-http-client/1.1" Oct 5 06:01:51 localhost podman[248506]: @ - - [05/Oct/2025:10:01:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19836 "" "Go-http-client/1.1" Oct 5 06:01:51 localhost nova_compute[297021]: 2025-10-05 10:01:51.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:52 localhost openstack_network_exporter[250601]: ERROR 
10:01:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:01:52 localhost openstack_network_exporter[250601]: ERROR 10:01:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:01:52 localhost openstack_network_exporter[250601]: ERROR 10:01:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:01:52 localhost openstack_network_exporter[250601]: ERROR 10:01:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:01:52 localhost openstack_network_exporter[250601]: Oct 5 06:01:52 localhost openstack_network_exporter[250601]: ERROR 10:01:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:01:52 localhost openstack_network_exporter[250601]: Oct 5 06:01:52 localhost podman[323372]: Oct 5 06:01:52 localhost podman[323372]: 2025-10-05 10:01:52.235293214 +0000 UTC m=+0.099576529 container create 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:01:52 localhost systemd[1]: Started libpod-conmon-6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517.scope. Oct 5 06:01:52 localhost podman[323372]: 2025-10-05 10:01:52.188201014 +0000 UTC m=+0.052484348 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:01:52 localhost systemd[1]: Started libcrun container. 
Oct 5 06:01:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0270bf4b6aed4fbe454a52f88392f170cea66ac0ba8bb40d161fde5ac5b1fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:01:52 localhost podman[323372]: 2025-10-05 10:01:52.318143776 +0000 UTC m=+0.182427080 container init 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:01:52 localhost podman[323372]: 2025-10-05 10:01:52.327782159 +0000 UTC m=+0.192065463 container start 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:01:52 localhost dnsmasq[323390]: started, version 2.85 cachesize 150 Oct 5 06:01:52 localhost dnsmasq[323390]: DNS service limited to local subnets Oct 5 06:01:52 localhost dnsmasq[323390]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:01:52 localhost dnsmasq[323390]: warning: no upstream servers configured Oct 
5 06:01:52 localhost dnsmasq-dhcp[323390]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:01:52 localhost dnsmasq[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/addn_hosts - 0 addresses Oct 5 06:01:52 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/host Oct 5 06:01:52 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/opts Oct 5 06:01:52 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:52.490 272040 INFO neutron.agent.dhcp.agent [None req-743f6d3e-fcc2-482c-ade9-336acb8d2e1e - - - - - -] DHCP configuration for ports {'3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0'} is completed#033[00m Oct 5 06:01:52 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:52.837 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:01:52Z, description=, device_id=98de158a-6996-45af-ad6c-8e7f89620384, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=975150f4-0c25-493e-a648-81aab9b27eee, ip_allocation=immediate, mac_address=fa:16:3e:dd:4d:98, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:01:48Z, description=, dns_domain=, id=9493e121-6caf-4009-9106-31c87685c480, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-160158674-network, port_security_enabled=True, project_id=1b069d6351214d1baf4ff391a6512beb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10540, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=332, status=ACTIVE, 
subnets=['7c54b184-2873-4ced-a164-cebeaddca583'], tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, updated_at=2025-10-05T10:01:49Z, vlan_transparent=None, network_id=9493e121-6caf-4009-9106-31c87685c480, port_security_enabled=False, project_id=1b069d6351214d1baf4ff391a6512beb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=350, status=DOWN, tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, updated_at=2025-10-05T10:01:52Z on network 9493e121-6caf-4009-9106-31c87685c480#033[00m Oct 5 06:01:53 localhost dnsmasq[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/addn_hosts - 1 addresses Oct 5 06:01:53 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/host Oct 5 06:01:53 localhost podman[323407]: 2025-10-05 10:01:53.101664924 +0000 UTC m=+0.062368346 container kill 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:01:53 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/opts Oct 5 06:01:53 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:53.395 272040 INFO neutron.agent.dhcp.agent [None req-0d6f1af6-519d-4e74-91c3-505d2a13fec1 - - - - - -] DHCP configuration for ports {'975150f4-0c25-493e-a648-81aab9b27eee'} is completed#033[00m Oct 5 06:01:53 localhost nova_compute[297021]: 2025-10-05 10:01:53.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:01:54 localhost podman[323429]: 2025-10-05 10:01:54.679969615 +0000 UTC m=+0.085168176 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true) Oct 5 06:01:54 localhost podman[323429]: 2025-10-05 10:01:54.686732619 +0000 UTC m=+0.091931210 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, 
tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 06:01:54 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:01:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:54.742 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:01:52Z, description=, device_id=98de158a-6996-45af-ad6c-8e7f89620384, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=975150f4-0c25-493e-a648-81aab9b27eee, ip_allocation=immediate, mac_address=fa:16:3e:dd:4d:98, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:01:48Z, description=, dns_domain=, id=9493e121-6caf-4009-9106-31c87685c480, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-160158674-network, port_security_enabled=True, project_id=1b069d6351214d1baf4ff391a6512beb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10540, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=332, status=ACTIVE, subnets=['7c54b184-2873-4ced-a164-cebeaddca583'], tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, updated_at=2025-10-05T10:01:49Z, vlan_transparent=None, network_id=9493e121-6caf-4009-9106-31c87685c480, port_security_enabled=False, project_id=1b069d6351214d1baf4ff391a6512beb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=350, status=DOWN, tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, updated_at=2025-10-05T10:01:52Z on network 
9493e121-6caf-4009-9106-31c87685c480#033[00m Oct 5 06:01:54 localhost dnsmasq[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/addn_hosts - 1 addresses Oct 5 06:01:54 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/host Oct 5 06:01:54 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/opts Oct 5 06:01:54 localhost podman[323463]: 2025-10-05 10:01:54.98073571 +0000 UTC m=+0.057736541 container kill 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:01:55 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:01:55.157 272040 INFO neutron.agent.dhcp.agent [None req-0a91709d-2442-4d0d-b841-a2578358a3c1 - - - - - -] DHCP configuration for ports {'975150f4-0c25-493e-a648-81aab9b27eee'} is completed#033[00m Oct 5 06:01:55 localhost nova_compute[297021]: 2025-10-05 10:01:55.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:01:57 localhost podman[323485]: 2025-10-05 10:01:57.690722741 +0000 UTC m=+0.094733636 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:01:57 localhost podman[323485]: 2025-10-05 10:01:57.728994092 +0000 UTC m=+0.133005007 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:01:57 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:01:58 localhost nova_compute[297021]: 2025-10-05 10:01:58.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:01:58 localhost nova_compute[297021]: 2025-10-05 10:01:58.722 2 DEBUG nova.virt.libvirt.driver [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Creating tmpfile /var/lib/nova/instances/tmp73y70yb5 to notify to other compute nodes that they should mount the same storage. 
_create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Oct 5 06:01:58 localhost nova_compute[297021]: 2025-10-05 10:01:58.747 2 DEBUG nova.compute.manager [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp73y70yb5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Oct 5 06:01:58 localhost nova_compute[297021]: 2025-10-05 10:01:58.774 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:01:58 localhost nova_compute[297021]: 2025-10-05 10:01:58.775 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:01:58 localhost nova_compute[297021]: 2025-10-05 10:01:58.795 2 INFO nova.compute.rpcapi [None req-cc879607-5743-4489-b0ac-90de6767450d 
f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Oct 5 06:01:58 localhost nova_compute[297021]: 2025-10-05 10:01:58.796 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:02:00 localhost nova_compute[297021]: 2025-10-05 10:02:00.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:02:00 localhost systemd[1]: tmp-crun.xtVBVy.mount: Deactivated successfully. Oct 5 06:02:00 localhost podman[323512]: 2025-10-05 10:02:00.704375947 +0000 UTC m=+0.110540975 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:02:00 localhost podman[323512]: 2025-10-05 10:02:00.718911082 +0000 UTC m=+0.125076090 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:02:00 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:02:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:01 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:01.391 2 INFO neutron.agent.securitygroups_rpc [req-73c8884d-fcac-478e-b650-1e996a105094 req-ac216bf8-85a5-4b0a-a857-5ba96f861d9f b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['6bdda02f-808a-473f-b12a-e76a2f226c0b']#033[00m Oct 5 06:02:01 localhost nova_compute[297021]: 2025-10-05 10:02:01.855 2 DEBUG nova.compute.manager [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] pre_live_migration data is 
LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp73y70yb5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Oct 5 06:02:01 localhost nova_compute[297021]: 2025-10-05 10:02:01.889 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Acquiring lock "refresh_cache-fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:02:01 localhost nova_compute[297021]: 2025-10-05 10:02:01.890 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Acquired lock "refresh_cache-fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:02:01 localhost nova_compute[297021]: 2025-10-05 10:02:01.890 2 DEBUG nova.network.neutron [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:02:02 localhost 
neutron_sriov_agent[264984]: 2025-10-05 10:02:02.356 2 INFO neutron.agent.securitygroups_rpc [req-7f63b2f6-ddd6-49c5-bce7-9008b92d1a67 req-6f282e34-8798-4b91-a623-4aa0b04785f4 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['3629377b-e072-4903-a05a-f6ff16e22cf7']#033[00m Oct 5 06:02:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:02:02 localhost podman[323532]: 2025-10-05 10:02:02.678312562 +0000 UTC m=+0.088336493 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 06:02:02 localhost podman[323532]: 2025-10-05 10:02:02.695777236 +0000 UTC m=+0.105801157 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64) Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.700 2 DEBUG nova.network.neutron [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 
c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Updating instance_info_cache with network_info: [{"id": "639ec525-18dc-48cb-9254-618d9c9ff42f", "address": "fa:16:3e:99:40:43", "network": {"id": "4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05", "bridge": "br-int", "label": "tempest-LiveMigrationTest-253727748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ca79c6dd41f44883b5382141d131a288", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap639ec525-18", "ovs_interfaceid": "639ec525-18dc-48cb-9254-618d9c9ff42f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:02:02 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.721 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Releasing lock "refresh_cache-fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.725 2 DEBUG nova.virt.libvirt.driver [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp73y70yb5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.725 2 DEBUG nova.virt.libvirt.driver [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Creating instance directory: /var/lib/nova/instances/fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Oct 5 06:02:02 localhost 
nova_compute[297021]: 2025-10-05 10:02:02.726 2 DEBUG nova.virt.libvirt.driver [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Ensure instance console log exists: /var/lib/nova/instances/fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.726 2 DEBUG nova.virt.libvirt.driver [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.727 2 DEBUG nova.virt.libvirt.vif [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-05T10:01:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1621729674',display_name='tempest-LiveMigrationTest-server-1621729674',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471151.localdomain',hostname='tempest-livemigrationtest-server-1621729674',id=6,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T10:01:54Z,launched_on='np0005471151.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005471151.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca79c6dd41f44883b5382141d131a288',ramdisk_id='',reservation_id='r-512r89p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-648867178',owner_user_name='tempest-LiveMigrationTest-648867178-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-10-05T10:01:54Z,user_data=None,user_id='2c39388980e04b87
a9a048001f9e1b0b',uuid=fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "639ec525-18dc-48cb-9254-618d9c9ff42f", "address": "fa:16:3e:99:40:43", "network": {"id": "4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05", "bridge": "br-int", "label": "tempest-LiveMigrationTest-253727748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ca79c6dd41f44883b5382141d131a288", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap639ec525-18", "ovs_interfaceid": "639ec525-18dc-48cb-9254-618d9c9ff42f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.728 2 DEBUG nova.network.os_vif_util [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Converting VIF {"id": "639ec525-18dc-48cb-9254-618d9c9ff42f", "address": "fa:16:3e:99:40:43", "network": {"id": "4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05", "bridge": "br-int", "label": "tempest-LiveMigrationTest-253727748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ca79c6dd41f44883b5382141d131a288", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap639ec525-18", "ovs_interfaceid": "639ec525-18dc-48cb-9254-618d9c9ff42f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.729 2 DEBUG nova.network.os_vif_util [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:40:43,bridge_name='br-int',has_traffic_filtering=True,id=639ec525-18dc-48cb-9254-618d9c9ff42f,network=Network(4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap639ec525-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.729 2 DEBUG os_vif [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:40:43,bridge_name='br-int',has_traffic_filtering=True,id=639ec525-18dc-48cb-9254-618d9c9ff42f,network=Network(4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap639ec525-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.730 2 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.731 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.732 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap639ec525-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap639ec525-18, col_values=(('external_ids', {'iface-id': '639ec525-18dc-48cb-9254-618d9c9ff42f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:40:43', 'vm-uuid': 'fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:02 localhost 
nova_compute[297021]: 2025-10-05 10:02:02.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.747 2 INFO os_vif [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:40:43,bridge_name='br-int',has_traffic_filtering=True,id=639ec525-18dc-48cb-9254-618d9c9ff42f,network=Network(4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap639ec525-18')#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.748 2 DEBUG nova.virt.libvirt.driver [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. 
pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Oct 5 06:02:02 localhost nova_compute[297021]: 2025-10-05 10:02:02.749 2 DEBUG nova.compute.manager [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp73y70yb5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Oct 5 06:02:03 localhost nova_compute[297021]: 2025-10-05 10:02:03.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:03 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:03.363 2 INFO neutron.agent.securitygroups_rpc [req-9dc7d59c-86cf-4175-be38-fa9e3d7cc1af req-b7105b62-6413-452f-a2bf-ca0e52fb3f07 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['8c680d2e-9a99-4414-88da-395392f19bf8']#033[00m Oct 5 06:02:03 localhost nova_compute[297021]: 2025-10-05 10:02:03.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:04 localhost neutron_sriov_agent[264984]: 2025-10-05 
10:02:04.911 2 INFO neutron.agent.securitygroups_rpc [req-5b7f7d3b-1c30-4ad1-ad78-2f9c38b249e5 req-40cbc6af-2bd3-4a54-bd00-eed01726fc22 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['5dddcefb-2e07-4a82-bdc7-ae53daf271ad']#033[00m Oct 5 06:02:05 localhost nova_compute[297021]: 2025-10-05 10:02:05.464 2 DEBUG nova.network.neutron [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Port 639ec525-18dc-48cb-9254-618d9c9ff42f updated with migration profile {'migrating_to': 'np0005471150.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Oct 5 06:02:05 localhost nova_compute[297021]: 2025-10-05 10:02:05.467 2 DEBUG nova.compute.manager [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp73y70yb5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Oct 5 06:02:05 localhost sshd[323555]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:02:05 localhost 
neutron_sriov_agent[264984]: 2025-10-05 10:02:05.740 2 INFO neutron.agent.securitygroups_rpc [req-9c7c0f23-7c68-40ed-847c-e1348564d729 req-a1847abc-2bf9-4a6d-8989-b4ba28a6ff1e b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['dbfea3ca-3964-451b-9539-59e59bd91033']#033[00m Oct 5 06:02:05 localhost systemd-logind[760]: New session 76 of user nova. Oct 5 06:02:05 localhost systemd[1]: Created slice User Slice of UID 42436. Oct 5 06:02:05 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Oct 5 06:02:05 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Oct 5 06:02:05 localhost systemd[1]: Starting User Manager for UID 42436... Oct 5 06:02:06 localhost systemd[323559]: Queued start job for default target Main User Target. Oct 5 06:02:06 localhost systemd[323559]: Created slice User Application Slice. Oct 5 06:02:06 localhost systemd[323559]: Started Mark boot as successful after the user session has run 2 minutes. Oct 5 06:02:06 localhost systemd[323559]: Started Daily Cleanup of User's Temporary Directories. Oct 5 06:02:06 localhost systemd[323559]: Reached target Paths. Oct 5 06:02:06 localhost systemd[323559]: Reached target Timers. Oct 5 06:02:06 localhost systemd[323559]: Starting D-Bus User Message Bus Socket... Oct 5 06:02:06 localhost systemd[323559]: Starting Create User's Volatile Files and Directories... Oct 5 06:02:06 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:06.086 2 INFO neutron.agent.securitygroups_rpc [req-b4f4f790-735f-4fea-adc4-2e1e90b281a8 req-b22baf81-c307-4826-b189-de8fd7b88120 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['dbfea3ca-3964-451b-9539-59e59bd91033']#033[00m Oct 5 06:02:06 localhost systemd[323559]: Listening on D-Bus User Message Bus Socket. Oct 5 06:02:06 localhost systemd[323559]: Reached target Sockets. 
Oct 5 06:02:06 localhost systemd[323559]: Finished Create User's Volatile Files and Directories. Oct 5 06:02:06 localhost systemd[323559]: Reached target Basic System. Oct 5 06:02:06 localhost systemd[323559]: Reached target Main User Target. Oct 5 06:02:06 localhost systemd[323559]: Startup finished in 177ms. Oct 5 06:02:06 localhost systemd[1]: Started User Manager for UID 42436. Oct 5 06:02:06 localhost systemd[1]: Started Session 76 of User nova. Oct 5 06:02:06 localhost systemd[1]: Starting libvirt secret daemon... Oct 5 06:02:06 localhost systemd[1]: Started libvirt secret daemon. Oct 5 06:02:06 localhost kernel: device tap639ec525-18 entered promiscuous mode Oct 5 06:02:06 localhost NetworkManager[5981]: [1759658526.3466] manager: (tap639ec525-18): new Tun device (/org/freedesktop/NetworkManager/Devices/21) Oct 5 06:02:06 localhost nova_compute[297021]: 2025-10-05 10:02:06.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:06 localhost ovn_controller[157794]: 2025-10-05T10:02:06Z|00082|binding|INFO|Claiming lport 639ec525-18dc-48cb-9254-618d9c9ff42f for this additional chassis. Oct 5 06:02:06 localhost ovn_controller[157794]: 2025-10-05T10:02:06Z|00083|binding|INFO|639ec525-18dc-48cb-9254-618d9c9ff42f: Claiming fa:16:3e:99:40:43 10.100.0.5 Oct 5 06:02:06 localhost ovn_controller[157794]: 2025-10-05T10:02:06Z|00084|binding|INFO|Claiming lport b5fb5f36-c849-4f81-ab48-bd2c70c82f8f for this additional chassis. Oct 5 06:02:06 localhost ovn_controller[157794]: 2025-10-05T10:02:06Z|00085|binding|INFO|b5fb5f36-c849-4f81-ab48-bd2c70c82f8f: Claiming fa:16:3e:2f:77:a7 19.80.0.237 Oct 5 06:02:06 localhost systemd-udevd[323606]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:02:06 localhost NetworkManager[5981]: [1759658526.3754] device (tap639ec525-18): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 5 06:02:06 localhost NetworkManager[5981]: [1759658526.3763] device (tap639ec525-18): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Oct 5 06:02:06 localhost ovn_controller[157794]: 2025-10-05T10:02:06Z|00086|binding|INFO|Setting lport 639ec525-18dc-48cb-9254-618d9c9ff42f ovn-installed in OVS Oct 5 06:02:06 localhost nova_compute[297021]: 2025-10-05 10:02:06.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:06 localhost systemd-machined[84982]: New machine qemu-3-instance-00000006. Oct 5 06:02:06 localhost systemd[1]: Started Virtual Machine qemu-3-instance-00000006. 
Oct 5 06:02:06 localhost nova_compute[297021]: 2025-10-05 10:02:06.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:06 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:06.534 2 INFO neutron.agent.securitygroups_rpc [req-9f948287-844a-4842-89bf-12bb1b82e63f req-3bd4bd2e-9087-420d-929b-49ab12f9cd68 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['dbfea3ca-3964-451b-9539-59e59bd91033']#033[00m Oct 5 06:02:07 localhost nova_compute[297021]: 2025-10-05 10:02:07.111 2 DEBUG nova.virt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:02:07 localhost nova_compute[297021]: 2025-10-05 10:02:07.112 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] VM Started (Lifecycle Event)#033[00m Oct 5 06:02:07 localhost nova_compute[297021]: 2025-10-05 10:02:07.144 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:07 localhost nova_compute[297021]: 2025-10-05 10:02:07.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:07 localhost nova_compute[297021]: 2025-10-05 10:02:07.891 2 DEBUG nova.virt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:02:07 localhost 
nova_compute[297021]: 2025-10-05 10:02:07.891 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] VM Resumed (Lifecycle Event)#033[00m Oct 5 06:02:07 localhost nova_compute[297021]: 2025-10-05 10:02:07.926 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:07 localhost nova_compute[297021]: 2025-10-05 10:02:07.933 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 06:02:07 localhost nova_compute[297021]: 2025-10-05 10:02:07.972 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] During the sync_power process the instance has moved from host np0005471151.localdomain to host np0005471150.localdomain#033[00m Oct 5 06:02:08 localhost ovn_controller[157794]: 2025-10-05T10:02:08Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:40:43 10.100.0.5 Oct 5 06:02:08 localhost ovn_controller[157794]: 2025-10-05T10:02:08Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:40:43 10.100.0.5 Oct 5 06:02:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:02:08 localhost systemd[1]: session-76.scope: Deactivated successfully. Oct 5 06:02:08 localhost systemd-logind[760]: Session 76 logged out. Waiting for processes to exit. 
Oct 5 06:02:08 localhost systemd-logind[760]: Removed session 76. Oct 5 06:02:08 localhost podman[323662]: 2025-10-05 10:02:08.313863934 +0000 UTC m=+0.118251815 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:02:08 localhost podman[323662]: 2025-10-05 10:02:08.323856846 +0000 UTC m=+0.128244687 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:02:08 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:02:08 localhost nova_compute[297021]: 2025-10-05 10:02:08.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:08 localhost nova_compute[297021]: 2025-10-05 10:02:08.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:08 localhost ovn_controller[157794]: 2025-10-05T10:02:08Z|00087|binding|INFO|Claiming lport 639ec525-18dc-48cb-9254-618d9c9ff42f for this chassis. Oct 5 06:02:08 localhost ovn_controller[157794]: 2025-10-05T10:02:08Z|00088|binding|INFO|639ec525-18dc-48cb-9254-618d9c9ff42f: Claiming fa:16:3e:99:40:43 10.100.0.5 Oct 5 06:02:08 localhost ovn_controller[157794]: 2025-10-05T10:02:08Z|00089|binding|INFO|Claiming lport b5fb5f36-c849-4f81-ab48-bd2c70c82f8f for this chassis. 
Oct 5 06:02:08 localhost ovn_controller[157794]: 2025-10-05T10:02:08Z|00090|binding|INFO|b5fb5f36-c849-4f81-ab48-bd2c70c82f8f: Claiming fa:16:3e:2f:77:a7 19.80.0.237 Oct 5 06:02:08 localhost ovn_controller[157794]: 2025-10-05T10:02:08Z|00091|binding|INFO|Setting lport 639ec525-18dc-48cb-9254-618d9c9ff42f up in Southbound Oct 5 06:02:08 localhost ovn_controller[157794]: 2025-10-05T10:02:08Z|00092|binding|INFO|Setting lport b5fb5f36-c849-4f81-ab48-bd2c70c82f8f up in Southbound Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.906 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:40:43 10.100.0.5'], port_security=['fa:16:3e:99:40:43 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1503487834', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1503487834', 'neutron:project_id': 'ca79c6dd41f44883b5382141d131a288', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'c0bd513c-388e-4362-8f22-2404d7744c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e78eb206-3248-4b38-9b4f-4b7a388ce8e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=639ec525-18dc-48cb-9254-618d9c9ff42f) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.909 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:77:a7 19.80.0.237'], port_security=['fa:16:3e:2f:77:a7 19.80.0.237'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['639ec525-18dc-48cb-9254-618d9c9ff42f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-237616748', 'neutron:cidrs': '19.80.0.237/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b3799eb-b69f-487f-9d2e-8e9111478409', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-237616748', 'neutron:project_id': 'ca79c6dd41f44883b5382141d131a288', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'c0bd513c-388e-4362-8f22-2404d7744c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=dd2bc314-ac1e-47fb-a371-837692084a56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=b5fb5f36-c849-4f81-ab48-bd2c70c82f8f) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.911 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 639ec525-18dc-48cb-9254-618d9c9ff42f in datapath 4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05 bound to our chassis#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.916 163434 INFO 
neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.928 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ff70bd-27b5-42a7-96c6-8fc75b85070f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.929 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4fff204c-11 in ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.932 163567 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4fff204c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.932 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dc181c-f4ea-4d32-b96c-3863ab84f9c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.933 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[4875b8ed-596d-4c15-825d-b0057279c985]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.945 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[149f47ab-a3ae-4c66-ba1f-4a7d06172442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:08 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:08.951 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-cc879607-5743-4489-b0ac-90de6767450d req-d4165476-19d9-4364-a43c-688336ea742e 
18771fb2bfdc4183936e6691c1fde428 ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] This port is not SRIOV, skip binding for port 639ec525-18dc-48cb-9254-618d9c9ff42f.#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.965 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[91a2736a-dc9b-4699-9b01-7c5dfae980a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:08.994 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[c03a170d-5d0e-46dc-bcb3-9dcf4cd6b8ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.000 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[46159ac1-574e-4d02-8545-fd7398de21be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost NetworkManager[5981]: [1759658529.0022] manager: (tap4fff204c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/22) Oct 5 06:02:09 localhost systemd-udevd[323609]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.033 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3863fc-64fc-4806-87a2-33684edd75c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.038 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[da7e9f88-3682-454f-9afa-953197ef907a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost nova_compute[297021]: 2025-10-05 10:02:09.048 2 INFO nova.compute.manager [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Post operation of migration started#033[00m Oct 5 06:02:09 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap4fff204c-11: link becomes ready Oct 5 06:02:09 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap4fff204c-10: link becomes ready Oct 5 06:02:09 localhost NetworkManager[5981]: [1759658529.0624] device (tap4fff204c-10): carrier: link connected Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.067 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[9109502b-023b-4a57-92b5-a0835542ffa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.086 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[23f8aa21-7477-4327-9d4e-43ad6262fe6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fff204c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], 
['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:5c:27:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 
'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1200821, 'reachable_time': 26724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 
255, 'pid': 323711, 'error': None, 'target': 'ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.101 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[7fee7986-3717-46a1-ba3a-6a95e451fe3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:270d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1200821, 'tstamp': 1200821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323712, 'error': None, 'target': 'ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.116 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[76438485-7f95-400e-9ebb-ff2b421cbe18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fff204c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:5c:27:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', 
{'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1200821, 'reachable_time': 26724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 
'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323713, 'error': None, 'target': 'ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.148 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[63e78dee-6006-4f07-8f03-4e74743c3ceb]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.205 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[575c59d6-59d2-4359-a8b7-fbafec522f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.207 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fff204c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.207 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.208 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fff204c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:09 localhost kernel: device tap4fff204c-10 entered promiscuous mode Oct 5 06:02:09 localhost nova_compute[297021]: 2025-10-05 10:02:09.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.215 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fff204c-10, col_values=(('external_ids', {'iface-id': 'b955c834-219b-4e54-b9a6-600f8ccb569f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:09 localhost ovn_controller[157794]: 
2025-10-05T10:02:09Z|00093|binding|INFO|Releasing lport b955c834-219b-4e54-b9a6-600f8ccb569f from this chassis (sb_readonly=0) Oct 5 06:02:09 localhost nova_compute[297021]: 2025-10-05 10:02:09.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:09 localhost nova_compute[297021]: 2025-10-05 10:02:09.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.227 163434 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.228 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf0671c-c346-4ae9-897c-cd5fd9c68144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.229 163434 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: global Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: log /dev/log local0 debug Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: log-tag haproxy-metadata-proxy-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05 Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: user root Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: group root Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: maxconn 1024 Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: pidfile /var/lib/neutron/external/pids/4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05.pid.haproxy Oct 5 06:02:09 localhost 
ovn_metadata_agent[163429]: daemon Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: defaults Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: log global Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: mode http Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: option httplog Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: option dontlognull Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: option http-server-close Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: option forwardfor Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: retries 3 Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: timeout http-request 30s Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: timeout connect 30s Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: timeout client 32s Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: timeout server 32s Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: timeout http-keep-alive 30s Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: listen listener Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: bind 169.254.169.254:80 Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: server metadata /var/lib/neutron/metadata_proxy Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: http-request add-header X-OVN-Network-ID 4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05 Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.231 163434 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05', 'env', 'PROCESS_TAG=haproxy-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05', 'haproxy', '-f', 
'/var/lib/neutron/ovn-metadata-proxy/4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 5 06:02:09 localhost nova_compute[297021]: 2025-10-05 10:02:09.347 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Acquiring lock "refresh_cache-fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:02:09 localhost nova_compute[297021]: 2025-10-05 10:02:09.348 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Acquired lock "refresh_cache-fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:02:09 localhost nova_compute[297021]: 2025-10-05 10:02:09.348 2 DEBUG nova.network.neutron [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:02:09 localhost nova_compute[297021]: 2025-10-05 10:02:09.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:09 localhost nova_compute[297021]: 2025-10-05 10:02:09.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:02:09 localhost podman[323746]: Oct 5 06:02:09 localhost podman[323746]: 2025-10-05 10:02:09.672497433 +0000 UTC m=+0.089851563 container create d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:09 localhost systemd[1]: Started libpod-conmon-d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d.scope. Oct 5 06:02:09 localhost podman[323746]: 2025-10-05 10:02:09.627932352 +0000 UTC m=+0.045286512 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 06:02:09 localhost systemd[1]: Started libcrun container. 
Oct 5 06:02:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc23af1704a6a606d22b280593f019bc5a95b2639f7602ac6b689fe4d0392aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:02:09 localhost podman[323746]: 2025-10-05 10:02:09.760541977 +0000 UTC m=+0.177896107 container init d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:09 localhost podman[323746]: 2025-10-05 10:02:09.769813729 +0000 UTC m=+0.187167849 container start d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:02:09 localhost neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05[323760]: [NOTICE] (323764) : New worker (323766) forked Oct 5 06:02:09 localhost neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05[323760]: [NOTICE] (323764) : Loading success. 
Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.826 163434 INFO neutron.agent.ovn.metadata.agent [-] Port b5fb5f36-c849-4f81-ab48-bd2c70c82f8f in datapath 3b3799eb-b69f-487f-9d2e-8e9111478409 unbound from our chassis#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.830 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 81af6cfe-92b4-40c2-b804-bc10fcb3b505 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.831 163434 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b3799eb-b69f-487f-9d2e-8e9111478409#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.839 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[38b67a27-106c-41e4-9d50-f69a235d5dca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.840 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b3799eb-b1 in ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.843 163567 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b3799eb-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.843 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1ee38b-aa22-4d72-994a-956e5b6750b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.845 163567 DEBUG 
oslo.privsep.daemon [-] privsep: reply[3175c5e4-1275-48ff-868b-e6a516ecc5f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.855 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[ad78e135-0d7c-497d-9bc5-5f081ba68d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.866 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[eb585212-f2c6-49b4-aa02-3e5acd41d2f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.890 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[163f5331-a6f0-44d1-a10f-3ab74848fd80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.897 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[0fea5808-0d6d-40b9-9f82-4020771ee3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost NetworkManager[5981]: [1759658529.8999] manager: (tap3b3799eb-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/23) Oct 5 06:02:09 localhost systemd-udevd[323706]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.939 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[c94a44be-4d5d-4008-9638-e88239d92008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.945 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc0a764-3199-432b-b5fc-2263ea16ff6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost NetworkManager[5981]: [1759658529.9729] device (tap3b3799eb-b0): carrier: link connected Oct 5 06:02:09 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3b3799eb-b0: link becomes ready Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.978 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[b040e152-394d-44e0-8bf9-c28115e46b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:09.998 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[46f4ae2f-0b4a-426b-8f22-444de917760e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b3799eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:7f:bb:3f'], ['IFLA_BROADCAST', 
'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1200912, 'reachable_time': 16030, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 
1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323785, 'error': None, 'target': 'ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.014 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[cef540d6-8f5b-4510-9018-2f86a4f303de]: (4, ({'family': 10, 'prefixlen': 64, 
'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:bb3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1200912, 'tstamp': 1200912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323786, 'error': None, 'target': 'ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.032 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[9e251c3b-0b92-4bd5-bc11-5bd02d15ca59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b3799eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:7f:bb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 
'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1200912, 'reachable_time': 16030, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 
'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323787, 'error': None, 'target': 'ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.068 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[29a349a5-2e5a-45f0-aa5e-71d3480e7074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:10 localhost nova_compute[297021]: 2025-10-05 10:02:10.098 2 DEBUG nova.network.neutron [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Updating instance_info_cache with network_info: [{"id": "639ec525-18dc-48cb-9254-618d9c9ff42f", "address": 
"fa:16:3e:99:40:43", "network": {"id": "4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05", "bridge": "br-int", "label": "tempest-LiveMigrationTest-253727748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ca79c6dd41f44883b5382141d131a288", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap639ec525-18", "ovs_interfaceid": "639ec525-18dc-48cb-9254-618d9c9ff42f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:02:10 localhost nova_compute[297021]: 2025-10-05 10:02:10.122 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Releasing lock "refresh_cache-fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:02:10 localhost nova_compute[297021]: 2025-10-05 10:02:10.138 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:10 localhost 
nova_compute[297021]: 2025-10-05 10:02:10.139 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:10 localhost nova_compute[297021]: 2025-10-05 10:02:10.141 2 DEBUG oslo_concurrency.lockutils [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.140 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[53926505-bbf5-4077-8182-57d0259745c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.143 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b3799eb-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.143 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.144 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b3799eb-b0, may_exist=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:10 localhost kernel: device tap3b3799eb-b0 entered promiscuous mode Oct 5 06:02:10 localhost nova_compute[297021]: 2025-10-05 10:02:10.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.180 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b3799eb-b0, col_values=(('external_ids', {'iface-id': '14736566-7986-4664-b838-97ef75bc59a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:10 localhost nova_compute[297021]: 2025-10-05 10:02:10.180 2 INFO nova.virt.libvirt.driver [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Sending announce-self command to QEMU monitor. 
Attempt 1 of 3#033[00m Oct 5 06:02:10 localhost nova_compute[297021]: 2025-10-05 10:02:10.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:10 localhost ovn_controller[157794]: 2025-10-05T10:02:10Z|00094|binding|INFO|Releasing lport 14736566-7986-4664-b838-97ef75bc59a4 from this chassis (sb_readonly=0) Oct 5 06:02:10 localhost journal[207037]: Domain id=3 name='instance-00000006' uuid=fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d is tainted: custom-monitor Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.194 163434 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b3799eb-b69f-487f-9d2e-8e9111478409.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b3799eb-b69f-487f-9d2e-8e9111478409.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 5 06:02:10 localhost nova_compute[297021]: 2025-10-05 10:02:10.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.195 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[02cc297a-9df3-4cde-91ab-e828073ba416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.196 163434 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: global Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: log /dev/log local0 debug Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: log-tag haproxy-metadata-proxy-3b3799eb-b69f-487f-9d2e-8e9111478409 Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: user root Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: group root Oct 5 06:02:10 
localhost ovn_metadata_agent[163429]: maxconn 1024 Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: pidfile /var/lib/neutron/external/pids/3b3799eb-b69f-487f-9d2e-8e9111478409.pid.haproxy Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: daemon Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: defaults Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: log global Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: mode http Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: option httplog Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: option dontlognull Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: option http-server-close Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: option forwardfor Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: retries 3 Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: timeout http-request 30s Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: timeout connect 30s Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: timeout client 32s Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: timeout server 32s Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: timeout http-keep-alive 30s Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: listen listener Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: bind 169.254.169.254:80 Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: server metadata /var/lib/neutron/metadata_proxy Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: http-request add-header X-OVN-Network-ID 3b3799eb-b69f-487f-9d2e-8e9111478409 Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 5 06:02:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:10.197 163434 DEBUG neutron.agent.linux.utils [-] Running command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409', 'env', 'PROCESS_TAG=haproxy-3b3799eb-b69f-487f-9d2e-8e9111478409', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b3799eb-b69f-487f-9d2e-8e9111478409.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 5 06:02:10 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:10.410 2 INFO neutron.agent.securitygroups_rpc [None req-9cd60413-135f-47c4-ab43-0a478fa866e3 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Security group member updated ['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149']#033[00m Oct 5 06:02:10 localhost nova_compute[297021]: 2025-10-05 10:02:10.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:10.440 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:10Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1374da87-a9a5-4840-80a7-197494b76131, ip_allocation=immediate, mac_address=fa:16:3e:4b:06:97, name=tempest-parent-738433439, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:01:48Z, description=, dns_domain=, id=9493e121-6caf-4009-9106-31c87685c480, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-160158674-network, port_security_enabled=True, 
project_id=1b069d6351214d1baf4ff391a6512beb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10540, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=332, status=ACTIVE, subnets=['7c54b184-2873-4ced-a164-cebeaddca583'], tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, updated_at=2025-10-05T10:01:49Z, vlan_transparent=None, network_id=9493e121-6caf-4009-9106-31c87685c480, port_security_enabled=True, project_id=1b069d6351214d1baf4ff391a6512beb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149'], standard_attr_id=485, status=DOWN, tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, updated_at=2025-10-05T10:02:10Z on network 9493e121-6caf-4009-9106-31c87685c480#033[00m Oct 5 06:02:10 localhost podman[323833]: Oct 5 06:02:10 localhost podman[323833]: 2025-10-05 10:02:10.660057057 +0000 UTC m=+0.098812856 container create 026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:02:10 localhost podman[323833]: 2025-10-05 10:02:10.609285877 +0000 UTC m=+0.048041676 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 06:02:10 localhost systemd[1]: Started libpod-conmon-026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f.scope. Oct 5 06:02:10 localhost systemd[1]: Started libcrun container. 
Oct 5 06:02:10 localhost podman[323848]: 2025-10-05 10:02:10.736748871 +0000 UTC m=+0.122002556 container kill 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:02:10 localhost dnsmasq[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/addn_hosts - 2 addresses Oct 5 06:02:10 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/host Oct 5 06:02:10 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/opts Oct 5 06:02:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecbab81b9a5111fc706542c059fb9cb0af6337cd044a34c19210b77d36c1854/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:02:10 localhost podman[323833]: 2025-10-05 10:02:10.749211891 +0000 UTC m=+0.187967690 container init 026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:02:10 localhost podman[323833]: 2025-10-05 10:02:10.762542102 
+0000 UTC m=+0.201297891 container start 026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:02:10 localhost neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409[323863]: [NOTICE] (323869) : New worker (323873) forked Oct 5 06:02:10 localhost neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409[323863]: [NOTICE] (323869) : Loading success. Oct 5 06:02:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:10.976 272040 INFO neutron.agent.dhcp.agent [None req-292a653f-c413-44d3-8496-2c0892fc5201 - - - - - -] DHCP configuration for ports {'1374da87-a9a5-4840-80a7-197494b76131'} is completed#033[00m Oct 5 06:02:11 localhost nova_compute[297021]: 2025-10-05 10:02:11.195 2 INFO nova.virt.libvirt.driver [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Sending announce-self command to QEMU monitor. 
Attempt 2 of 3#033[00m Oct 5 06:02:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:11 localhost nova_compute[297021]: 2025-10-05 10:02:11.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:11 localhost nova_compute[297021]: 2025-10-05 10:02:11.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:12 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:12.128 272040 INFO neutron.agent.linux.ip_lib [None req-7f93b159-fe86-4d84-a05a-92ede0665ecc - - - - - -] Device tap0f2d4d75-e7 cannot be used as it has no MAC address#033[00m Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:12 localhost kernel: device tap0f2d4d75-e7 entered promiscuous mode Oct 5 06:02:12 localhost NetworkManager[5981]: [1759658532.1578] manager: (tap0f2d4d75-e7): new Generic device (/org/freedesktop/NetworkManager/Devices/24) Oct 5 06:02:12 localhost ovn_controller[157794]: 2025-10-05T10:02:12Z|00095|binding|INFO|Claiming lport 0f2d4d75-e75c-43f9-8fce-7465c9a57717 for this chassis. 
Oct 5 06:02:12 localhost ovn_controller[157794]: 2025-10-05T10:02:12Z|00096|binding|INFO|0f2d4d75-e75c-43f9-8fce-7465c9a57717: Claiming unknown Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:12.170 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c80697f7-3043-40b9-ba7e-9e4d45b917f9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0f2d4d75-e75c-43f9-8fce-7465c9a57717) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:12.173 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 0f2d4d75-e75c-43f9-8fce-7465c9a57717 in datapath 3b6dd988-c148-4dbf-ae5b-dba073193ccc 
bound to our chassis
Oct 5 06:02:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:12.175 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3b6dd988-c148-4dbf-ae5b-dba073193ccc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 5 06:02:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:12.177 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[1b86de20-60c4-46d3-9ff8-54872dc057c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:12 localhost journal[237931]: ethtool ioctl error on tap0f2d4d75-e7: No such device
Oct 5 06:02:12 localhost ovn_controller[157794]: 2025-10-05T10:02:12Z|00097|binding|INFO|Setting lport 0f2d4d75-e75c-43f9-8fce-7465c9a57717 ovn-installed in OVS
Oct 5 06:02:12 localhost ovn_controller[157794]: 2025-10-05T10:02:12Z|00098|binding|INFO|Setting lport 0f2d4d75-e75c-43f9-8fce-7465c9a57717 up in Southbound
Oct 5 06:02:12 localhost journal[237931]: ethtool ioctl error on tap0f2d4d75-e7: No such device
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:12 localhost journal[237931]: ethtool ioctl error on tap0f2d4d75-e7: No such device
Oct 5 06:02:12 localhost journal[237931]: ethtool ioctl error on tap0f2d4d75-e7: No such device
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.202 2 INFO nova.virt.libvirt.driver [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 5 06:02:12 localhost journal[237931]: ethtool ioctl error on tap0f2d4d75-e7: No such device
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.208 2 DEBUG nova.compute.manager [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 5 06:02:12 localhost journal[237931]: ethtool ioctl error on tap0f2d4d75-e7: No such device
Oct 5 06:02:12 localhost journal[237931]: ethtool ioctl error on tap0f2d4d75-e7: No such device
Oct 5 06:02:12 localhost journal[237931]: ethtool ioctl error on tap0f2d4d75-e7: No such device
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.229 2 DEBUG nova.objects.instance [None req-cc879607-5743-4489-b0ac-90de6767450d f7a00f21829e45d1bc7a04fd8128a175 c43807eec49f41f0803c57d27b774c57 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.440 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.441 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.442 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.442 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.443 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 5 06:02:12 localhost ovn_controller[157794]: 2025-10-05T10:02:12Z|00099|binding|INFO|Releasing lport b955c834-219b-4e54-b9a6-600f8ccb569f from this chassis (sb_readonly=0)
Oct 5 06:02:12 localhost ovn_controller[157794]: 2025-10-05T10:02:12Z|00100|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:02:12 localhost ovn_controller[157794]: 2025-10-05T10:02:12Z|00101|binding|INFO|Releasing lport 14736566-7986-4664-b838-97ef75bc59a4 from this chassis (sb_readonly=0)
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:12 localhost nova_compute[297021]: 2025-10-05 10:02:12.968 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.057 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.058 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.062 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.062 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 5 06:02:13 localhost podman[323990]:
Oct 5 06:02:13 localhost podman[323990]: 2025-10-05 10:02:13.25088654 +0000 UTC m=+0.127786785 container create ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:02:13 localhost podman[323990]: 2025-10-05 10:02:13.186454308 +0000 UTC m=+0.063354653 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:02:13 localhost systemd[1]: Started libpod-conmon-ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e.scope.
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.299 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.302 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11109MB free_disk=41.70122146606445GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.302 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.303 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:13 localhost systemd[1]: tmp-crun.zUKQJL.mount: Deactivated successfully.
Oct 5 06:02:13 localhost systemd[1]: Started libcrun container.
Oct 5 06:02:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8bad2f41e9cbfa02a9a26d70764461c62da1bf606c4e946e61c1d7f215a1636/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.349 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Applying migration context for instance fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d as it has an incoming, in-progress migration e8b71380-772c-4d86-b17d-dcb4ff1cd267. Migration status is running _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.350 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 5 06:02:13 localhost podman[323990]: 2025-10-05 10:02:13.357018055 +0000 UTC m=+0.233918370 container init ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.362 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Oct 5 06:02:13 localhost podman[323990]: 2025-10-05 10:02:13.366230865 +0000 UTC m=+0.243131140 container start ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3)
Oct 5 06:02:13 localhost dnsmasq[324008]: started, version 2.85 cachesize 150
Oct 5 06:02:13 localhost dnsmasq[324008]: DNS service limited to local subnets
Oct 5 06:02:13 localhost dnsmasq[324008]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:02:13 localhost dnsmasq[324008]: warning: no upstream servers configured
Oct 5 06:02:13 localhost dnsmasq-dhcp[324008]: DHCP, static leases only on 19.80.0.0, lease time 1d
Oct 5 06:02:13 localhost dnsmasq[324008]: read /var/lib/neutron/dhcp/3b6dd988-c148-4dbf-ae5b-dba073193ccc/addn_hosts - 0 addresses
Oct 5 06:02:13 localhost dnsmasq-dhcp[324008]: read /var/lib/neutron/dhcp/3b6dd988-c148-4dbf-ae5b-dba073193ccc/host
Oct 5 06:02:13 localhost dnsmasq-dhcp[324008]: read /var/lib/neutron/dhcp/3b6dd988-c148-4dbf-ae5b-dba073193ccc/opts
Oct 5 06:02:13 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:13.574 272040 INFO neutron.agent.dhcp.agent [None req-f3c96c40-4ea0-4692-af54-052ed6978a5b - - - - - -] DHCP configuration for ports {'bac74788-cacd-4240-bc16-90e5547e0313'} is completed
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.647 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.647 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.648 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.648 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1152MB phys_disk=41GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 5 06:02:13 localhost nova_compute[297021]: 2025-10-05 10:02:13.854 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing inventories for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.053 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating ProviderTree inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.053 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.079 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing aggregate associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 5 06:02:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.107 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing trait associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, traits: HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 5 06:02:14 localhost podman[324009]: 2025-10-05 10:02:14.188123475 +0000 UTC m=+0.094326385 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.199 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 5 06:02:14 localhost podman[324009]: 2025-10-05 10:02:14.202733732 +0000 UTC m=+0.108936652 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 06:02:14 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.341 2 DEBUG oslo_concurrency.lockutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Acquiring lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.342 2 DEBUG oslo_concurrency.lockutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.343 2 DEBUG oslo_concurrency.lockutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Acquiring lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.343 2 DEBUG oslo_concurrency.lockutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.343 2 DEBUG oslo_concurrency.lockutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.345 2 INFO nova.compute.manager [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Terminating instance
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.347 2 DEBUG nova.compute.manager [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 5 06:02:14 localhost kernel: device tap639ec525-18 left promiscuous mode
Oct 5 06:02:14 localhost NetworkManager[5981]: [1759658534.4116] device (tap639ec525-18): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Oct 5 06:02:14 localhost ovn_controller[157794]: 2025-10-05T10:02:14Z|00102|binding|INFO|Releasing lport 639ec525-18dc-48cb-9254-618d9c9ff42f from this chassis (sb_readonly=0)
Oct 5 06:02:14 localhost ovn_controller[157794]: 2025-10-05T10:02:14Z|00103|binding|INFO|Setting lport 639ec525-18dc-48cb-9254-618d9c9ff42f down in Southbound
Oct 5 06:02:14 localhost ovn_controller[157794]: 2025-10-05T10:02:14Z|00104|binding|INFO|Releasing lport b5fb5f36-c849-4f81-ab48-bd2c70c82f8f from this chassis (sb_readonly=0)
Oct 5 06:02:14 localhost ovn_controller[157794]: 2025-10-05T10:02:14Z|00105|binding|INFO|Setting lport b5fb5f36-c849-4f81-ab48-bd2c70c82f8f down in Southbound
Oct 5 06:02:14 localhost ovn_controller[157794]: 2025-10-05T10:02:14Z|00106|binding|INFO|Removing iface tap639ec525-18 ovn-installed in OVS
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.436 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:40:43 10.100.0.5'], port_security=['fa:16:3e:99:40:43 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1503487834', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1503487834', 'neutron:project_id': 'ca79c6dd41f44883b5382141d131a288', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'c0bd513c-388e-4362-8f22-2404d7744c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e78eb206-3248-4b38-9b4f-4b7a388ce8e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=639ec525-18dc-48cb-9254-618d9c9ff42f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.439 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:77:a7 19.80.0.237'], port_security=['fa:16:3e:2f:77:a7 19.80.0.237'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['639ec525-18dc-48cb-9254-618d9c9ff42f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-237616748', 'neutron:cidrs': '19.80.0.237/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b3799eb-b69f-487f-9d2e-8e9111478409', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-237616748', 'neutron:project_id': 'ca79c6dd41f44883b5382141d131a288', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'c0bd513c-388e-4362-8f22-2404d7744c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=dd2bc314-ac1e-47fb-a371-837692084a56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=b5fb5f36-c849-4f81-ab48-bd2c70c82f8f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:02:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.442 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 639ec525-18dc-48cb-9254-618d9c9ff42f in datapath 4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05 unbound from our chassis
Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.446 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 5 06:02:14 localhost ovn_controller[157794]: 2025-10-05T10:02:14Z|00107|binding|INFO|Releasing lport b955c834-219b-4e54-b9a6-600f8ccb569f from this chassis (sb_readonly=0)
Oct 5 06:02:14 localhost ovn_controller[157794]: 2025-10-05T10:02:14Z|00108|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:02:14 localhost ovn_controller[157794]: 2025-10-05T10:02:14Z|00109|binding|INFO|Releasing lport 14736566-7986-4664-b838-97ef75bc59a4 from this chassis (sb_readonly=0)
Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.447 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[4c997b3d-b994-475a-a48b-6b67c218dee6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.448 163434 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05 namespace which is not needed anymore
Oct 5 06:02:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e90 e90: 6 total, 6 up, 6 in
Oct 5 06:02:14 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:14 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 1.917s CPU time.
Oct 5 06:02:14 localhost systemd-machined[84982]: Machine qemu-3-instance-00000006 terminated.
Oct 5 06:02:14 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e90: 6 total, 6 up, 6 in
Oct 5 06:02:14 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:14.494 2 INFO neutron.agent.securitygroups_rpc [None req-0ae8405b-5f6b-48b7-adaa-b729d895987d b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Security group member updated ['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149']
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.595 2 INFO nova.virt.libvirt.driver [-] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Instance destroyed successfully.
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.596 2 DEBUG nova.objects.instance [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Lazy-loading 'resources' on Instance uuid fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.615 2 DEBUG nova.compute.manager [req-10209c42-ee25-42c8-8854-26a632179321 req-2b50d856-90ca-48fb-a6cf-a6275d7eb7e2 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Received event network-vif-unplugged-639ec525-18dc-48cb-9254-618d9c9ff42f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.615 2 DEBUG oslo_concurrency.lockutils [req-10209c42-ee25-42c8-8854-26a632179321 req-2b50d856-90ca-48fb-a6cf-a6275d7eb7e2 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.616 2 DEBUG oslo_concurrency.lockutils [req-10209c42-ee25-42c8-8854-26a632179321 req-2b50d856-90ca-48fb-a6cf-a6275d7eb7e2 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05[323760]: [NOTICE] (323764) : haproxy version is 2.8.14-c23fe91
Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05[323760]: [NOTICE] (323764) : path to executable is /usr/sbin/haproxy
Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05[323760]: [WARNING] (323764) : Exiting Master process...
Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05[323760]: [WARNING] (323764) : Exiting Master process...
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.616 2 DEBUG oslo_concurrency.lockutils [req-10209c42-ee25-42c8-8854-26a632179321 req-2b50d856-90ca-48fb-a6cf-a6275d7eb7e2 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05[323760]: [ALERT] (323764) : Current worker (323766) exited with code 143 (Terminated)
Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05[323760]: [WARNING] (323764) : All workers exited. Exiting...
(0) Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.623 2 DEBUG nova.compute.manager [req-10209c42-ee25-42c8-8854-26a632179321 req-2b50d856-90ca-48fb-a6cf-a6275d7eb7e2 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] No waiting events found dispatching network-vif-unplugged-639ec525-18dc-48cb-9254-618d9c9ff42f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.624 2 DEBUG nova.compute.manager [req-10209c42-ee25-42c8-8854-26a632179321 req-2b50d856-90ca-48fb-a6cf-a6275d7eb7e2 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Received event network-vif-unplugged-639ec525-18dc-48cb-9254-618d9c9ff42f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Oct 5 06:02:14 localhost systemd[1]: libpod-d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d.scope: Deactivated successfully. 
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.626 2 DEBUG nova.virt.libvirt.vif [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-05T10:01:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1621729674',display_name='tempest-LiveMigrationTest-server-1621729674',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471150.localdomain',hostname='tempest-livemigrationtest-server-1621729674',id=6,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T10:01:54Z,launched_on='np0005471151.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005471150.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='ca79c6dd41f44883b5382141d131a288',ramdisk_id='',reservation_id='r-512r89p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_n
ame='tempest-LiveMigrationTest-648867178',owner_user_name='tempest-LiveMigrationTest-648867178-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-10-05T10:02:12Z,user_data=None,user_id='2c39388980e04b87a9a048001f9e1b0b',uuid=fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "639ec525-18dc-48cb-9254-618d9c9ff42f", "address": "fa:16:3e:99:40:43", "network": {"id": "4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05", "bridge": "br-int", "label": "tempest-LiveMigrationTest-253727748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ca79c6dd41f44883b5382141d131a288", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap639ec525-18", "ovs_interfaceid": "639ec525-18dc-48cb-9254-618d9c9ff42f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.626 2 DEBUG nova.network.os_vif_util [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Converting VIF {"id": "639ec525-18dc-48cb-9254-618d9c9ff42f", "address": "fa:16:3e:99:40:43", "network": {"id": "4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05", "bridge": "br-int", "label": 
"tempest-LiveMigrationTest-253727748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "ca79c6dd41f44883b5382141d131a288", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap639ec525-18", "ovs_interfaceid": "639ec525-18dc-48cb-9254-618d9c9ff42f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.627 2 DEBUG nova.network.os_vif_util [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:40:43,bridge_name='br-int',has_traffic_filtering=True,id=639ec525-18dc-48cb-9254-618d9c9ff42f,network=Network(4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap639ec525-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.628 2 DEBUG os_vif [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Unplugging vif 
VIFOpenVSwitch(active=True,address=fa:16:3e:99:40:43,bridge_name='br-int',has_traffic_filtering=True,id=639ec525-18dc-48cb-9254-618d9c9ff42f,network=Network(4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap639ec525-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:14 localhost podman[324074]: 2025-10-05 10:02:14.630131249 +0000 UTC m=+0.070893237 container died d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.630 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap639ec525-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:02:14 
localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:02:14 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3969694669' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.689 2 INFO os_vif [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:40:43,bridge_name='br-int',has_traffic_filtering=True,id=639ec525-18dc-48cb-9254-618d9c9ff42f,network=Network(4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap639ec525-18')#033[00m Oct 5 06:02:14 localhost podman[324074]: 2025-10-05 10:02:14.707907984 +0000 UTC m=+0.148669922 container cleanup d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.709 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.716 2 DEBUG nova.compute.provider_tree 
[None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.739 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:02:14 localhost podman[324100]: 2025-10-05 10:02:14.740920061 +0000 UTC m=+0.066202041 container cleanup d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:02:14 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:14.745 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:14Z, description=, 
device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3fa04c44-9142-4d6c-991f-aca11ea8e8ee, ip_allocation=immediate, mac_address=fa:16:3e:ce:90:0e, name=tempest-subport-973969040, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:02:10Z, description=, dns_domain=, id=3b6dd988-c148-4dbf-ae5b-dba073193ccc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-305872545, port_security_enabled=True, project_id=1b069d6351214d1baf4ff391a6512beb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18311, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=491, status=ACTIVE, subnets=['704e7b93-9838-4681-8f85-756bdfaedce2'], tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, updated_at=2025-10-05T10:02:11Z, vlan_transparent=None, network_id=3b6dd988-c148-4dbf-ae5b-dba073193ccc, port_security_enabled=True, project_id=1b069d6351214d1baf4ff391a6512beb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149'], standard_attr_id=509, status=DOWN, tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, updated_at=2025-10-05T10:02:14Z on network 3b6dd988-c148-4dbf-ae5b-dba073193ccc#033[00m Oct 5 06:02:14 localhost systemd[1]: libpod-conmon-d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d.scope: Deactivated successfully. 
Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.771 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.772 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.773 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.774 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 06:02:14 localhost podman[324132]: 2025-10-05 10:02:14.784466174 +0000 UTC m=+0.061371659 container remove d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251001) Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.788 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fc65cd-b53b-473a-a5d0-e58a81c0345f]: (4, ('Sun Oct 5 10:02:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05 (d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d)\nd3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d\nSun Oct 5 10:02:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05 (d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d)\nd3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.789 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[a0996a32-9e1f-4d58-8834-860bf5b4e927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.790 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fff204c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:14 localhost kernel: device tap4fff204c-10 left promiscuous mode Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.796 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.801 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c18b3f-1935-462c-80df-8cda05b3a379]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:14 localhost nova_compute[297021]: 2025-10-05 10:02:14.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.814 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[51576709-e5e9-49b0-ad2a-dc8b8dcb50cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.815 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[863cb6ca-9f89-4e92-88b5-3badc72bc8dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.828 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebb345f-b2e2-4cff-a905-18d9ae810391]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], 
['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1200814, 'reachable_time': 23883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 
'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324157, 'error': None, 'target': 'ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.830 163645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4fff204c-1c8f-4762-a2e1-f1d5d5f3fe05 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.830 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[7443a811-69f2-4f94-b717-41cf3dac9a52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.831 163434 INFO neutron.agent.ovn.metadata.agent [-] Port b5fb5f36-c849-4f81-ab48-bd2c70c82f8f in datapath 3b3799eb-b69f-487f-9d2e-8e9111478409 unbound from our chassis#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.834 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 81af6cfe-92b4-40c2-b804-bc10fcb3b505 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.834 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b3799eb-b69f-487f-9d2e-8e9111478409, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.835 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[51ab59f8-6ea7-49fc-b3c8-8299c1a03345]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:14.835 163434 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409 namespace which is not needed anymore#033[00m Oct 5 06:02:14 localhost dnsmasq[324008]: read /var/lib/neutron/dhcp/3b6dd988-c148-4dbf-ae5b-dba073193ccc/addn_hosts - 1 addresses Oct 5 06:02:14 localhost dnsmasq-dhcp[324008]: read 
/var/lib/neutron/dhcp/3b6dd988-c148-4dbf-ae5b-dba073193ccc/host Oct 5 06:02:14 localhost podman[324184]: 2025-10-05 10:02:14.960227652 +0000 UTC m=+0.049238100 container kill ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b6dd988-c148-4dbf-ae5b-dba073193ccc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:02:14 localhost dnsmasq-dhcp[324008]: read /var/lib/neutron/dhcp/3b6dd988-c148-4dbf-ae5b-dba073193ccc/opts Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409[323863]: [NOTICE] (323869) : haproxy version is 2.8.14-c23fe91 Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409[323863]: [NOTICE] (323869) : path to executable is /usr/sbin/haproxy Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409[323863]: [WARNING] (323869) : Exiting Master process... Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409[323863]: [WARNING] (323869) : Exiting Master process... Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409[323863]: [ALERT] (323869) : Current worker (323873) exited with code 143 (Terminated) Oct 5 06:02:14 localhost neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409[323863]: [WARNING] (323869) : All workers exited. Exiting... (0) Oct 5 06:02:14 localhost systemd[1]: libpod-026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f.scope: Deactivated successfully. 
Oct 5 06:02:15 localhost podman[324196]: 2025-10-05 10:02:15.002871971 +0000 UTC m=+0.068168654 container died 026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:15 localhost podman[324196]: 2025-10-05 10:02:15.040173464 +0000 UTC m=+0.105470127 container cleanup 026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:02:15 localhost podman[324218]: 2025-10-05 10:02:15.073761017 +0000 UTC m=+0.067926047 container cleanup 026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS) Oct 5 06:02:15 localhost systemd[1]: libpod-conmon-026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f.scope: Deactivated successfully. Oct 5 06:02:15 localhost podman[324235]: 2025-10-05 10:02:15.116928031 +0000 UTC m=+0.054055380 container remove 026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:15 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:15.121 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[506b66ad-efb5-4862-8ccf-258bac5a0d84]: (4, ('Sun Oct 5 10:02:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409 (026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f)\n026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f\nSun Oct 5 10:02:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409 (026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f)\n026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:15 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:15.123 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[2b74738d-2d5c-4e48-9ee6-5700b6ae5bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:15 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:15.123 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] 
Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b3799eb-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:15 localhost kernel: device tap3b3799eb-b0 left promiscuous mode Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:15 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:15.133 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[c70b2e36-3d67-4a67-8e08-06e10dd709eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:15 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:15.150 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[5005d876-8d1e-43bf-bec8-fad7ded35ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:15 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:15.151 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[70f028ac-1310-4e2c-8b92-e6330afa74cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:15 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:15.165 272040 INFO neutron.agent.dhcp.agent [None req-4b8bd43c-e00d-4709-91a1-9fb0e04eda0f - - - - - -] DHCP configuration for ports {'3fa04c44-9142-4d6c-991f-aca11ea8e8ee'} is completed#033[00m Oct 5 06:02:15 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:15.170 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a04b06-7b13-4bab-85eb-f9dc44a6e192]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': 
[['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 
'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1200904, 'reachable_time': 25251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 
'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324258, 'error': None, 'target': 'ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:15 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:15.172 163645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b3799eb-b69f-487f-9d2e-8e9111478409 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 5 06:02:15 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:15.172 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[09e1b4ad-a0a5-4104-86ad-112c2d51bd9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:15 localhost systemd[1]: var-lib-containers-storage-overlay-aecbab81b9a5111fc706542c059fb9cb0af6337cd044a34c19210b77d36c1854-merged.mount: Deactivated successfully. Oct 5 06:02:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-026fb3eb37cee6e6fe3222ab596e686a73cd65cb1215046676f4e08efb37cd2f-userdata-shm.mount: Deactivated successfully. Oct 5 06:02:15 localhost systemd[1]: run-netns-ovnmeta\x2d3b3799eb\x2db69f\x2d487f\x2d9d2e\x2d8e9111478409.mount: Deactivated successfully. Oct 5 06:02:15 localhost systemd[1]: var-lib-containers-storage-overlay-2dc23af1704a6a606d22b280593f019bc5a95b2639f7602ac6b689fe4d0392aa-merged.mount: Deactivated successfully. Oct 5 06:02:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3b669ce8ed3a22437205e0f00979c2a546a0161d7c77c2c8fd156cd91e8b83d-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:02:15 localhost systemd[1]: run-netns-ovnmeta\x2d4fff204c\x2d1c8f\x2d4762\x2da2e1\x2df1d5d5f3fe05.mount: Deactivated successfully. Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.269 2 INFO nova.virt.libvirt.driver [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Deleting instance files /var/lib/nova/instances/fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d_del#033[00m Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.270 2 INFO nova.virt.libvirt.driver [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Deletion of /var/lib/nova/instances/fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d_del complete#033[00m Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.348 2 INFO nova.compute.manager [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.349 2 DEBUG oslo.service.loopingcall [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.350 2 DEBUG nova.compute.manager [-] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.350 2 DEBUG nova.network.neutron [-] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.464 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.464 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 06:02:15 localhost nova_compute[297021]: 2025-10-05 10:02:15.480 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.301 2 DEBUG nova.network.neutron [-] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.334 2 INFO nova.compute.manager [-] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Took 0.98 seconds to 
deallocate network for instance.#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.399 2 DEBUG oslo_concurrency.lockutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.400 2 DEBUG oslo_concurrency.lockutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.437 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e90 do_prune osdmap full prune enabled Oct 5 06:02:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e91 e91: 6 total, 6 up, 6 in Oct 5 06:02:16 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e91: 6 total, 6 up, 6 in Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.488 2 DEBUG oslo_concurrency.processutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Running cmd (subprocess): 
ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.661 2 DEBUG nova.compute.manager [req-eac40d64-c1be-42ef-a728-77f960592def req-fbcf262b-9e06-4fac-b0e4-e32258aa427d 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Received event network-vif-plugged-639ec525-18dc-48cb-9254-618d9c9ff42f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.662 2 DEBUG oslo_concurrency.lockutils [req-eac40d64-c1be-42ef-a728-77f960592def req-fbcf262b-9e06-4fac-b0e4-e32258aa427d 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.662 2 DEBUG oslo_concurrency.lockutils [req-eac40d64-c1be-42ef-a728-77f960592def req-fbcf262b-9e06-4fac-b0e4-e32258aa427d 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.663 2 DEBUG oslo_concurrency.lockutils [req-eac40d64-c1be-42ef-a728-77f960592def req-fbcf262b-9e06-4fac-b0e4-e32258aa427d 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d-events" "released" by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.663 2 DEBUG nova.compute.manager [req-eac40d64-c1be-42ef-a728-77f960592def req-fbcf262b-9e06-4fac-b0e4-e32258aa427d 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] No waiting events found dispatching network-vif-plugged-639ec525-18dc-48cb-9254-618d9c9ff42f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.664 2 WARNING nova.compute.manager [req-eac40d64-c1be-42ef-a728-77f960592def req-fbcf262b-9e06-4fac-b0e4-e32258aa427d 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Received unexpected event network-vif-plugged-639ec525-18dc-48cb-9254-618d9c9ff42f for instance with vm_state deleted and task_state None.#033[00m Oct 5 06:02:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:02:16 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3513435810' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.931 2 DEBUG oslo_concurrency.processutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.939 2 DEBUG nova.compute.provider_tree [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.955 2 DEBUG nova.scheduler.client.report [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:02:16 localhost nova_compute[297021]: 2025-10-05 10:02:16.978 2 DEBUG oslo_concurrency.lockutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:17 localhost nova_compute[297021]: 2025-10-05 10:02:17.028 2 INFO nova.scheduler.client.report [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Deleted allocations for instance fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d#033[00m Oct 5 06:02:17 localhost nova_compute[297021]: 2025-10-05 10:02:17.094 2 DEBUG oslo_concurrency.lockutils [None req-594d5977-17ac-44cc-998f-a3e3816a5934 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Lock "fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 06:02:17 localhost podman[324282]: 2025-10-05 10:02:17.679588669 +0000 UTC m=+0.083495881 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:02:17 localhost podman[324282]: 2025-10-05 10:02:17.725813815 +0000 UTC m=+0.129720947 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:02:17 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:02:17 localhost podman[324281]: 2025-10-05 10:02:17.726874913 +0000 UTC m=+0.134509137 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 06:02:17 localhost podman[324281]: 2025-10-05 10:02:17.813972191 +0000 UTC m=+0.221606405 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:02:17 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:02:18 localhost systemd[1]: Stopping User Manager for UID 42436... Oct 5 06:02:18 localhost systemd[323559]: Activating special unit Exit the Session... Oct 5 06:02:18 localhost systemd[323559]: Stopped target Main User Target. Oct 5 06:02:18 localhost systemd[323559]: Stopped target Basic System. 
Oct 5 06:02:18 localhost systemd[323559]: Stopped target Paths. Oct 5 06:02:18 localhost systemd[323559]: Stopped target Sockets. Oct 5 06:02:18 localhost systemd[323559]: Stopped target Timers. Oct 5 06:02:18 localhost systemd[323559]: Stopped Mark boot as successful after the user session has run 2 minutes. Oct 5 06:02:18 localhost systemd[323559]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 06:02:18 localhost systemd[323559]: Closed D-Bus User Message Bus Socket. Oct 5 06:02:18 localhost systemd[323559]: Stopped Create User's Volatile Files and Directories. Oct 5 06:02:18 localhost systemd[323559]: Removed slice User Application Slice. Oct 5 06:02:18 localhost systemd[323559]: Reached target Shutdown. Oct 5 06:02:18 localhost systemd[323559]: Finished Exit the Session. Oct 5 06:02:18 localhost systemd[323559]: Reached target Exit the Session. Oct 5 06:02:18 localhost systemd[1]: user@42436.service: Deactivated successfully. Oct 5 06:02:18 localhost systemd[1]: Stopped User Manager for UID 42436. Oct 5 06:02:18 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Oct 5 06:02:18 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. Oct 5 06:02:18 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Oct 5 06:02:18 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. Oct 5 06:02:18 localhost systemd[1]: Removed slice User Slice of UID 42436. 
Oct 5 06:02:18 localhost nova_compute[297021]: 2025-10-05 10:02:18.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:02:18 localhost nova_compute[297021]: 2025-10-05 10:02:18.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 06:02:18 localhost nova_compute[297021]: 2025-10-05 10:02:18.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 06:02:18 localhost nova_compute[297021]: 2025-10-05 10:02:18.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:18 localhost nova_compute[297021]: 2025-10-05 10:02:18.599 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 5 06:02:18 localhost nova_compute[297021]: 2025-10-05 10:02:18.600 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 5 06:02:18 localhost nova_compute[297021]: 2025-10-05 10:02:18.600 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 5 06:02:18 localhost nova_compute[297021]: 2025-10-05 10:02:18.601 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 5 06:02:19 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:19.099 2 INFO neutron.agent.securitygroups_rpc [None req-752332cd-f9cf-4a7f-a2cb-f938e53511d4 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Security group member updated ['c0bd513c-388e-4362-8f22-2404d7744c8b']#033[00m
Oct 5 06:02:19 localhost dnsmasq[323164]: read /var/lib/neutron/dhcp/3b3799eb-b69f-487f-9d2e-8e9111478409/addn_hosts - 0 addresses
Oct 5 06:02:19 localhost dnsmasq-dhcp[323164]: read /var/lib/neutron/dhcp/3b3799eb-b69f-487f-9d2e-8e9111478409/host
Oct 5 06:02:19 localhost podman[324337]: 2025-10-05 10:02:19.335608422 +0000 UTC m=+0.061058791 container kill 4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 5 06:02:19 localhost dnsmasq-dhcp[323164]: read /var/lib/neutron/dhcp/3b3799eb-b69f-487f-9d2e-8e9111478409/opts
Oct 5 06:02:19 localhost nova_compute[297021]: 2025-10-05 10:02:19.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:19 localhost nova_compute[297021]: 2025-10-05 10:02:19.712 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 5 06:02:19 localhost nova_compute[297021]: 2025-10-05 10:02:19.735 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 5 06:02:19 localhost nova_compute[297021]: 2025-10-05 10:02:19.736 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 5 06:02:19 localhost systemd[1]: tmp-crun.ACUBMo.mount: Deactivated successfully.
Oct 5 06:02:19 localhost dnsmasq[323164]: exiting on receipt of SIGTERM
Oct 5 06:02:19 localhost podman[324375]: 2025-10-05 10:02:19.760472899 +0000 UTC m=+0.075201544 container kill 4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 5 06:02:19 localhost systemd[1]: libpod-4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648.scope: Deactivated successfully.
Oct 5 06:02:19 localhost podman[324390]: 2025-10-05 10:02:19.829915197 +0000 UTC m=+0.058934743 container died 4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b3799eb-b69f-487f-9d2e-8e9111478409, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2)
Oct 5 06:02:19 localhost podman[324390]: 2025-10-05 10:02:19.85649969 +0000 UTC m=+0.085519156 container cleanup 4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b3799eb-b69f-487f-9d2e-8e9111478409, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 5 06:02:19 localhost systemd[1]: libpod-conmon-4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648.scope: Deactivated successfully.
Oct 5 06:02:19 localhost podman[324397]: 2025-10-05 10:02:19.909583173 +0000 UTC m=+0.126708346 container remove 4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b3799eb-b69f-487f-9d2e-8e9111478409, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 06:02:19 localhost ovn_controller[157794]: 2025-10-05T10:02:19Z|00110|binding|INFO|Releasing lport e627cbb3-4742-4b6f-9bf0-18e2b7cb4597 from this chassis (sb_readonly=0)
Oct 5 06:02:19 localhost ovn_controller[157794]: 2025-10-05T10:02:19Z|00111|binding|INFO|Setting lport e627cbb3-4742-4b6f-9bf0-18e2b7cb4597 down in Southbound
Oct 5 06:02:19 localhost kernel: device tape627cbb3-47 left promiscuous mode
Oct 5 06:02:19 localhost nova_compute[297021]: 2025-10-05 10:02:19.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:19 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:19.928 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.3/24', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-3b3799eb-b69f-487f-9d2e-8e9111478409', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b3799eb-b69f-487f-9d2e-8e9111478409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca79c6dd41f44883b5382141d131a288', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd2bc314-ac1e-47fb-a371-837692084a56, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e627cbb3-4742-4b6f-9bf0-18e2b7cb4597) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:02:19 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:19.930 163434 INFO neutron.agent.ovn.metadata.agent [-] Port e627cbb3-4742-4b6f-9bf0-18e2b7cb4597 in datapath 3b3799eb-b69f-487f-9d2e-8e9111478409 unbound from our chassis#033[00m
Oct 5 06:02:19 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:19.934 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b3799eb-b69f-487f-9d2e-8e9111478409, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:02:19 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:19.935 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3b584c48-12a1-41ae-ad9c-72b5e23a2cef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:02:19 localhost nova_compute[297021]: 2025-10-05 10:02:19.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:19 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:19.997 272040 INFO neutron.agent.dhcp.agent [None req-7f627198-6c76-4b1d-aadb-ff050cdfb79d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:02:20 localhost systemd[1]: var-lib-containers-storage-overlay-a51a5b07ff643e8a7fbfe1dd21a1aa57f0e7c1b521c71aad4963b3b96f2c8fbd-merged.mount: Deactivated successfully.
Oct 5 06:02:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4400a12184a58f6fe4374ebb2f6d2068e1b2d9ebb46a2aad22e97b1869bbd648-userdata-shm.mount: Deactivated successfully.
Oct 5 06:02:20 localhost systemd[1]: run-netns-qdhcp\x2d3b3799eb\x2db69f\x2d487f\x2d9d2e\x2d8e9111478409.mount: Deactivated successfully.
Oct 5 06:02:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:20.461 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:02:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:20.464 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:02:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:20.464 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:02:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:20.465 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:02:20 localhost nova_compute[297021]: 2025-10-05 10:02:20.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:20 localhost ovn_controller[157794]: 2025-10-05T10:02:20Z|00112|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:02:20 localhost nova_compute[297021]: 2025-10-05 10:02:20.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:20 localhost nova_compute[297021]: 2025-10-05 10:02:20.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:21 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:21.241 2 INFO neutron.agent.securitygroups_rpc [None req-ca79af68-b363-4752-a92a-9b5fd7e9378c 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Security group member updated ['c0bd513c-388e-4362-8f22-2404d7744c8b']#033[00m
Oct 5 06:02:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:02:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 5 06:02:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e92 e92: 6 total, 6 up, 6 in
Oct 5 06:02:21 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e92: 6 total, 6 up, 6 in
Oct 5 06:02:21 localhost podman[248506]: time="2025-10-05T10:02:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:02:21 localhost podman[248506]: @ - - [05/Oct/2025:10:02:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149324 "" "Go-http-client/1.1"
Oct 5 06:02:21 localhost podman[248506]: @ - - [05/Oct/2025:10:02:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20319 "" "Go-http-client/1.1"
Oct 5 06:02:22 localhost openstack_network_exporter[250601]: ERROR 10:02:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:02:22 localhost openstack_network_exporter[250601]: ERROR 10:02:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:02:22 localhost openstack_network_exporter[250601]: ERROR 10:02:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:02:22 localhost openstack_network_exporter[250601]: ERROR 10:02:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:02:22 localhost openstack_network_exporter[250601]:
Oct 5 06:02:22 localhost openstack_network_exporter[250601]: ERROR 10:02:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:02:22 localhost openstack_network_exporter[250601]:
Oct 5 06:02:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 06:02:22 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:02:23 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:23.026 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005471152.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:10Z, description=, device_id=b1dce7a2-b06b-4cdb-b072-ccd123742ded, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-liveautoblockmigrationv225test-server-2001023684, extra_dhcp_opts=[], fixed_ips=[], id=1374da87-a9a5-4840-80a7-197494b76131, ip_allocation=immediate, mac_address=fa:16:3e:4b:06:97, name=tempest-parent-738433439, network_id=9493e121-6caf-4009-9106-31c87685c480, port_security_enabled=True, project_id=1b069d6351214d1baf4ff391a6512beb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149'], standard_attr_id=485, status=DOWN, tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, trunk_details=sub_ports=[], trunk_id=b550f6bc-4b02-45ea-9fde-d1fa93bf86e6, updated_at=2025-10-05T10:02:22Z on network 9493e121-6caf-4009-9106-31c87685c480#033[00m
Oct 5 06:02:23 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:23.270 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:02:23 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:23.271 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 5 06:02:23 localhost podman[324525]: 2025-10-05 10:02:23.275031691 +0000 UTC m=+0.062272234 container kill 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:02:23 localhost dnsmasq[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/addn_hosts - 2 addresses
Oct 5 06:02:23 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/host
Oct 5 06:02:23 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/opts
Oct 5 06:02:23 localhost systemd[1]: tmp-crun.ZotgQE.mount: Deactivated successfully.
Oct 5 06:02:23 localhost nova_compute[297021]: 2025-10-05 10:02:23.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:23 localhost ovn_controller[157794]: 2025-10-05T10:02:23Z|00113|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:02:23 localhost nova_compute[297021]: 2025-10-05 10:02:23.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:23 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:02:23 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:02:23 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:23.565 272040 INFO neutron.agent.dhcp.agent [None req-c0f71941-8f46-41f5-80c6-dd188f559bb6 - - - - - -] DHCP configuration for ports {'1374da87-a9a5-4840-80a7-197494b76131'} is completed#033[00m
Oct 5 06:02:23 localhost nova_compute[297021]: 2025-10-05 10:02:23.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:24 localhost nova_compute[297021]: 2025-10-05 10:02:24.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:25 localhost ovn_controller[157794]: 2025-10-05T10:02:25Z|00114|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:02:25 localhost nova_compute[297021]: 2025-10-05 10:02:25.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 06:02:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 5 06:02:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4153189591' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 5 06:02:25 localhost podman[324546]: 2025-10-05 10:02:25.684040221 +0000 UTC m=+0.088531288 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 5 06:02:25 localhost podman[324546]: 2025-10-05 10:02:25.718054086 +0000 UTC m=+0.122545123 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:02:25 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 06:02:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:02:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 06:02:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:02:26 localhost nova_compute[297021]: 2025-10-05 10:02:26.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:27 localhost nova_compute[297021]: 2025-10-05 10:02:27.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:27 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:02:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 06:02:28 localhost nova_compute[297021]: 2025-10-05 10:02:28.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:28 localhost podman[324564]: 2025-10-05 10:02:28.677064675 +0000 UTC m=+0.088893866 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 5 06:02:28 localhost podman[324564]: 2025-10-05 10:02:28.742875735 +0000 UTC m=+0.154704896 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 06:02:28 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 06:02:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:29.273 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 5 06:02:29 localhost nova_compute[297021]: 2025-10-05 10:02:29.586 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 5 06:02:29 localhost nova_compute[297021]: 2025-10-05 10:02:29.587 2 INFO nova.compute.manager [-] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] VM Stopped (Lifecycle Event)#033[00m
Oct 5 06:02:29 localhost nova_compute[297021]: 2025-10-05 10:02:29.609 2 DEBUG nova.compute.manager [None req-60e3d98b-29e6-4c64-af2c-c936ba6e5f28 - - - - - -] [instance: fd3b3e5e-8ab6-4dca-b3d1-a36aacb8757d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 5 06:02:29 localhost nova_compute[297021]: 2025-10-05 10:02:29.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.
Oct 5 06:02:31 localhost podman[324589]: 2025-10-05 10:02:31.677692907 +0000 UTC m=+0.085084014 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001) Oct 5 06:02:31 localhost podman[324589]: 2025-10-05 10:02:31.688514471 +0000 UTC m=+0.095905558 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 06:02:31 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:02:33 localhost snmpd[68045]: empty variable list in _query Oct 5 06:02:33 localhost snmpd[68045]: empty variable list in _query Oct 5 06:02:33 localhost snmpd[68045]: empty variable list in _query Oct 5 06:02:33 localhost ovn_controller[157794]: 2025-10-05T10:02:33Z|00115|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:02:33 localhost nova_compute[297021]: 2025-10-05 10:02:33.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:02:33 localhost nova_compute[297021]: 2025-10-05 10:02:33.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:33 localhost podman[324607]: 2025-10-05 10:02:33.70016312 +0000 UTC m=+0.107893873 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350) Oct 5 06:02:33 localhost podman[324607]: 2025-10-05 10:02:33.715475917 +0000 UTC m=+0.123206660 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 06:02:33 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:02:34 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:34.169 272040 INFO neutron.agent.linux.ip_lib [None req-a8a0d97f-3ffb-46e2-8f58-ca24ac923b50 - - - - - -] Device tap5541c352-d8 cannot be used as it has no MAC address#033[00m Oct 5 06:02:34 localhost nova_compute[297021]: 2025-10-05 10:02:34.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:34 localhost kernel: device tap5541c352-d8 entered promiscuous mode Oct 5 06:02:34 localhost ovn_controller[157794]: 2025-10-05T10:02:34Z|00116|binding|INFO|Claiming lport 5541c352-d8e1-4fa6-9cbe-7297a76a9005 for this chassis. 
Oct 5 06:02:34 localhost ovn_controller[157794]: 2025-10-05T10:02:34Z|00117|binding|INFO|5541c352-d8e1-4fa6-9cbe-7297a76a9005: Claiming unknown Oct 5 06:02:34 localhost NetworkManager[5981]: [1759658554.2037] manager: (tap5541c352-d8): new Generic device (/org/freedesktop/NetworkManager/Devices/25) Oct 5 06:02:34 localhost nova_compute[297021]: 2025-10-05 10:02:34.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:34 localhost systemd-udevd[324637]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:02:34 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:34.213 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10f9adaef10b420fadc2449804b80832', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbfa472a-c33c-4cb5-b1c8-ba60d8f673f9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5541c352-d8e1-4fa6-9cbe-7297a76a9005) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:34 localhost ovn_controller[157794]: 2025-10-05T10:02:34Z|00118|binding|INFO|Setting lport 5541c352-d8e1-4fa6-9cbe-7297a76a9005 ovn-installed in OVS Oct 5 06:02:34 localhost ovn_controller[157794]: 2025-10-05T10:02:34Z|00119|binding|INFO|Setting lport 5541c352-d8e1-4fa6-9cbe-7297a76a9005 up in Southbound Oct 5 06:02:34 localhost nova_compute[297021]: 2025-10-05 10:02:34.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:34 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:34.216 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 5541c352-d8e1-4fa6-9cbe-7297a76a9005 in datapath f019caf2-f140-40b3-a7d1-19d0fd0e8a5e bound to our chassis#033[00m Oct 5 06:02:34 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:34.218 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f019caf2-f140-40b3-a7d1-19d0fd0e8a5e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:02:34 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:34.219 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[01746cf6-5954-4914-9921-cc9824975b3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:34 localhost nova_compute[297021]: 2025-10-05 10:02:34.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:34 localhost nova_compute[297021]: 2025-10-05 10:02:34.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:34 localhost nova_compute[297021]: 2025-10-05 10:02:34.292 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:34 localhost nova_compute[297021]: 2025-10-05 10:02:34.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:35 localhost podman[324691]: Oct 5 06:02:35 localhost podman[324691]: 2025-10-05 10:02:35.083989405 +0000 UTC m=+0.085994938 container create 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:02:35 localhost systemd[1]: Started libpod-conmon-49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae.scope. Oct 5 06:02:35 localhost systemd[1]: tmp-crun.0FcVeG.mount: Deactivated successfully. Oct 5 06:02:35 localhost podman[324691]: 2025-10-05 10:02:35.04335058 +0000 UTC m=+0.045356103 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:02:35 localhost systemd[1]: Started libcrun container. 
Oct 5 06:02:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bfbfa1ede400325cfdfe592371797e3059bebaee27167f4149e791136f9c9cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:02:35 localhost podman[324691]: 2025-10-05 10:02:35.169420437 +0000 UTC m=+0.171425930 container init 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:02:35 localhost podman[324691]: 2025-10-05 10:02:35.182660877 +0000 UTC m=+0.184666370 container start 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:02:35 localhost dnsmasq[324710]: started, version 2.85 cachesize 150 Oct 5 06:02:35 localhost dnsmasq[324710]: DNS service limited to local subnets Oct 5 06:02:35 localhost dnsmasq[324710]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:02:35 localhost dnsmasq[324710]: warning: no upstream servers configured Oct 
5 06:02:35 localhost dnsmasq-dhcp[324710]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:02:35 localhost dnsmasq[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/addn_hosts - 0 addresses Oct 5 06:02:35 localhost dnsmasq-dhcp[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/host Oct 5 06:02:35 localhost dnsmasq-dhcp[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/opts Oct 5 06:02:35 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:35.382 272040 INFO neutron.agent.dhcp.agent [None req-fb540a95-220e-42b8-a11e-0183da47f31c - - - - - -] DHCP configuration for ports {'04d02e2f-5716-4d51-99a0-027b9f1e1a03'} is completed#033[00m Oct 5 06:02:35 localhost ovn_controller[157794]: 2025-10-05T10:02:35Z|00120|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:02:35 localhost nova_compute[297021]: 2025-10-05 10:02:35.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:36 localhost systemd[1]: tmp-crun.M1Cfuy.mount: Deactivated successfully. 
Oct 5 06:02:36 localhost ovn_controller[157794]: 2025-10-05T10:02:36Z|00121|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:02:36 localhost nova_compute[297021]: 2025-10-05 10:02:36.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:36 localhost nova_compute[297021]: 2025-10-05 10:02:36.907 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Creating tmpfile /var/lib/nova/instances/tmprnxabv3g to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Oct 5 06:02:36 localhost nova_compute[297021]: 2025-10-05 10:02:36.909 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmprnxabv3g',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) 
check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Oct 5 06:02:37 localhost nova_compute[297021]: 2025-10-05 10:02:37.849 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmprnxabv3g',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b1dce7a2-b06b-4cdb-b072-ccd123742ded',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Oct 5 06:02:37 localhost nova_compute[297021]: 2025-10-05 10:02:37.882 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquiring lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:02:37 localhost nova_compute[297021]: 2025-10-05 10:02:37.883 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquired lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:02:37 localhost nova_compute[297021]: 2025-10-05 10:02:37.883 2 DEBUG 
nova.network.neutron [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:02:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:38 localhost podman[324711]: 2025-10-05 10:02:38.677322227 +0000 UTC m=+0.087855559 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:02:38 localhost podman[324711]: 2025-10-05 10:02:38.69105234 +0000 UTC m=+0.101585662 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:02:38 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.711 2 DEBUG nova.network.neutron [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Updating instance_info_cache with network_info: [{"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.743 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Releasing lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.746 2 DEBUG 
nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmprnxabv3g',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b1dce7a2-b06b-4cdb-b072-ccd123742ded',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.747 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Creating instance directory: /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.747 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Ensure instance console log exists: /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m 
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.748 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.750 2 DEBUG nova.virt.libvirt.vif [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-05T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2001023684',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471152.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2001023684',id=7,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T10:02:28Z,launched_on='np0005471152.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005471152.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b069d6351214d1baf4ff391a6512beb',ramdisk_id='',reservation_id='r-k8v41bv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1030348059',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1030348059-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-10-05T10:02:28Z,user_data=None,user_id='b56f1071781246a68c1693519a9cd054',uuid=b1dce7a2-b06b-4cdb-b072-ccd123742ded,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.750 2 DEBUG nova.network.os_vif_util [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Converting VIF {"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.752 2 DEBUG nova.network.os_vif_util [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.752 2 DEBUG os_vif [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.754 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1374da87-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1374da87-a9, col_values=(('external_ids', {'iface-id': '1374da87-a9a5-4840-80a7-197494b76131', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:06:97', 'vm-uuid': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.776 2 INFO os_vif [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9')
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.777 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 5 06:02:38 localhost nova_compute[297021]: 2025-10-05 10:02:38.777 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmprnxabv3g',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b1dce7a2-b06b-4cdb-b072-ccd123742ded',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.837 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.838 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.845 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '614792c1-4e91-4db0-bc1f-a8876718f7c4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.838984', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6cc3bec8-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': '341ab302da132673cd95eb508d0efb00d9a0f5ebab2eb073080dfe0b6b4d403b'}]}, 'timestamp': '2025-10-05 10:02:38.846039', '_unique_id': 'fd4e3e6f4e534834ac823fdd8fe9c34b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.847 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.849 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.872 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.874 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '29608f23-abfd-4bae-a1af-f73b1ac2a013', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:02:38.849677', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6cc804c4-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': '3fb2031423398c258c27f43384f305686e0ecccf2f46bf77d81f61ba0e7c69a9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:02:38.849677', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6cc81cde-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': '0bccf6a627bd76887e7af58dd656419a8c2cd312886c3464e007febce061c748'}]}, 'timestamp': '2025-10-05 10:02:38.874674', '_unique_id': '63e4418362ac4232b1939b2bd50bb9f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.877 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.877 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.878 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '882e15de-33fc-4119-90d5-a489bd23f32e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:02:38.877646', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6cc8ad3e-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': 'd7838be5329875b835f34da1ec397875c2e6d9db70136b94cf760a0afd9d63c3'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:02:38.877646', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6cc8c792-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': '3bf11399802974d0900267ecf604c0468492f36c73986388b46b8ecd46227ebd'}]}, 'timestamp': '2025-10-05 10:02:38.879027', '_unique_id': '51ebf728bffb4324adedf11a5436c571'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 
06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.880 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.882 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.882 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '31fc47e9-20ff-41eb-9280-66dc735ba18f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.882506', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6cc96a44-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': '5bd9537fd15fb6fec615ae79667b6b59295f278f02abc2936dd4f911ec5abd38'}]}, 'timestamp': '2025-10-05 10:02:38.883229', '_unique_id': 'fdb2f8975e4a4a9fbb2e5d690994ee97'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.884 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:02:38.886 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.899 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.899 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '63ce3deb-6ff2-4152-a088-2345bf94ed48', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:02:38.886628', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6ccbf5a2-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.110658852, 'message_signature': '58300290067961e8dfb044aaf2d415367cddf70f0c2f82c1e0370fbceb284df8'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:02:38.886628', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6ccc0a06-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.110658852, 'message_signature': 'fe0d8b4a94a7263f4fc8ed9218d758c954b324e58f3b80d8880064afb3343970'}]}, 'timestamp': '2025-10-05 10:02:38.900336', '_unique_id': 'e1d6758ac738489bab04094e6bcc1aaf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.901 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.903 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.903 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9d12ed14-9366-4af8-8ef9-e71e7c9d0b6b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.903457', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6ccc9f34-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': 'b0f7eee08e9d7cd195ad44640f41a3cd400499b05b660d9f11e7f5ec252e8b16'}]}, 'timestamp': '2025-10-05 10:02:38.904243', '_unique_id': '2a8800453a204982ac2c5d9a5a1ed7fd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.905 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.907 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.907 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.908 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bd600f7f-773c-459e-a04f-247acbc534e0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:02:38.907636', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6ccd42c2-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.110658852, 'message_signature': 'fffed78b45aa8aade544c8f90ac048f2486ecb62cc9d966fea2dae946f270169'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:02:38.907636', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6ccd5e6a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.110658852, 'message_signature': '31e8a473e33f8f5d01bdaa0288bafbdca555174b77b971a6cf5d7059bbf54996'}]}, 'timestamp': '2025-10-05 10:02:38.909158', '_unique_id': '4f90e9b16fcc415093f5622ef9259ac0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.912 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.912 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.913 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': '80c225bb-56c4-4e46-b488-5499be6f9728', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:02:38.912577', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6cce018a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': '0930b9ea4f6946a74f00cd26232c90e86d8f683ceaa10a7e6ace4f173cdba369'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:02:38.912577', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6cce1b84-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': 'ea06c38e03b4e551e1af9ffe314c2a0df00e9ae03ace241295eb41893902563b'}]}, 'timestamp': '2025-10-05 10:02:38.913945', '_unique_id': 'e4e11c42a71f48f487bf96b899942581'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:02:38.915 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.917 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.917 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.918 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '119fe7b0-cc49-4d88-a9b9-a298b3f595ff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:02:38.917309', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6ccebae4-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.110658852, 'message_signature': 'b8f74b1d3f4f60a42fb81a261554e3901ca8927f56bd7c44e71de9974ee9f3ad'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:02:38.917309', 'resource_metadata': {'display_name': 'test', 
'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6cced740-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.110658852, 'message_signature': '18c3e76f0d40aa1fdec59e42e893e3b893d34ab4317342824fa89c8409bb11f0'}]}, 'timestamp': '2025-10-05 10:02:38.918752', '_unique_id': 'ae3f0e907d52495195d5f6a5ba7cbb1e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in 
connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get 
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.920 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.921 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.936 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 15630000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bd6d8bf9-b3a7-4818-9d96-6cc6e7e1c904', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15630000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:02:38.921915', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '6cd1aab0-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.160586039, 'message_signature': '09c9f181eb5a2ccf0a5955e1cf462fe454a70f2aa993f5244430ce70afaca7bc'}]}, 'timestamp': '2025-10-05 10:02:38.937151', '_unique_id': '2bbaa31a71394b74a1d4be3b6f6a81e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.937 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.938 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b5dfa139-2f2d-4a94-85c1-ce6004d35154', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:02:38.938675', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '6cd1f628-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.160586039, 'message_signature': 'f71afa349cceb7702f7d65d72e45367d97e156876c5a99e676814fba509a6891'}]}, 'timestamp': '2025-10-05 10:02:38.939086', '_unique_id': 'debde82565324d1785c275e2985335e1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.940 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.941 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5cbb9739-e411-4b86-add3-0bd8f9875fa0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:02:38.941261', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6cd25c44-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': '419fbe9d9bf5f0cf8b315577dbdab1dd792072a40bf4e355047d7dc396a55090'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:02:38.941261', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6cd26b9e-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': 'e037a672da4a0fad9fd20b4587c8739e28658d302659cf9f2411b47d90bd6e9d'}]}, 'timestamp': '2025-10-05 10:02:38.942090', '_unique_id': 'ca0394f8c0aa4e11b85da5eeed6a630d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.942 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.944 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.944 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c73be22c-523a-4a3f-9c6a-51f9b09c6bfd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.944231', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6cd2d20a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': '57c6b4131f423447ce6bf9de09dad3758de546208551b2e4b942224fb7ff8b81'}]}, 'timestamp': '2025-10-05 10:02:38.944755', '_unique_id': '57c2de7af4b745d8a087e0ad821b1d98'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:02:38.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.946 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.946 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bc12992f-f2f0-4f3d-b157-225c5006df06', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.946864', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6cd3363c-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': '755c04cccf7bac1154b4c86e136ed4f6db9a6c76fa5ef0eceb9a1ddebc5565cd'}]}, 'timestamp': '2025-10-05 10:02:38.947295', '_unique_id': '4c04bd2f0397417b86b2f268f875e3b9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 
06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.949 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.949 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9d2d7ed3-fb2e-493c-898d-958d192ef278', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.949289', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6cd395fa-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': 'fc6c3d23bc1565fc566dd4bc343a263bc9f43729b00b0bc445ed532bd8f8c555'}]}, 'timestamp': '2025-10-05 10:02:38.949744', '_unique_id': 'e6f5068008c549ab8466609e77a51d13'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:02:38.951 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.951 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fe492a8c-a686-4700-adf4-73ee0ac095cd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.951781', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6cd3f630-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': '6df2d6d6e26275388dbbb80659cee6715749ae75e56517b8395d79f111b6a4c1'}]}, 'timestamp': '2025-10-05 10:02:38.952210', '_unique_id': '4b8c2906bca6463694ab0a3d23275289'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:02:38.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.954 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c7384225-e132-4476-b518-25cab867fb1b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:02:38.954158', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6cd452ec-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': '30be68eed5be968b852cc13d45c9eab810dae36c9ebb45ae4b8f4625e3274002'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:02:38.954158', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6cd46304-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': '99a562a634751d1e2cead615f21f759c4d8492296be496a7efea6d9a11d8a1c3'}]}, 'timestamp': '2025-10-05 10:02:38.954977', '_unique_id': '79844fc378b5411f8625bb145a98176f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.955 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.956 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.956 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '57053a7d-1df2-4b81-a2a2-c3453dca5aee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.956961', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6cd4c09c-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': 'd2b5854c200e28eb7b925a670289037c2893dee7ea0a6db0ef21019c90ada102'}]}, 'timestamp': '2025-10-05 10:02:38.957389', '_unique_id': 'edb0cf69130e47089e9052b21a2a585b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.958 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.959 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.959 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.959 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e4ad3817-920f-4a42-a672-634da3080080', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:02:38.959567', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6cd52654-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': 'cb0c67f317bc6539a4185ee55d5a4c70f5a90434cc971ae9ebddc2442bcbd161'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:02:38.959567', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6cd5359a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.073631025, 'message_signature': '644e3740d7541e2dfa6f2c8ceed7ba289476424090ed7154d854b13cb9fc1abb'}]}, 'timestamp': '2025-10-05 10:02:38.960369', '_unique_id': '666adb7e39484263b7dcfe572361bc0b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.961 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.962 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.962 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dac06ae9-59ca-40f9-8148-98560558041e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.962473', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6cd597f6-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': 'be3fa624883040045c3715fe04e6f66dd6656f370f47b9c1747d2b58734416ff'}]}, 'timestamp': '2025-10-05 10:02:38.962902', '_unique_id': '34af61ab3d414d6f8e4c1a6e03d17598'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     yield
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.963 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.964 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.964 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '52e1ac58-fbf7-4fbe-a78e-835a662bff48', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:02:38.964871', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6cd5f430-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12038.062894863, 'message_signature': '5b5606e7352f5b40a7f88253f2deece5fc77696c026026c8bf25dcd14da4a703'}]}, 'timestamp': '2025-10-05 10:02:38.965195', '_unique_id': '3c29aff252534c2693bf4436234f634b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.965 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:02:38.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:02:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:02:38.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:02:39 localhost nova_compute[297021]: 2025-10-05 10:02:39.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:40.398 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:40Z, description=, device_id=b1ce920c-beca-46e2-9ac1-6e2a09f8eaa6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80ac185f-decf-4b22-b82f-c32355c9fc3d, ip_allocation=immediate, mac_address=fa:16:3e:53:47:c8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:02:31Z, description=, dns_domain=, id=f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-1435170142-network, port_security_enabled=True, project_id=10f9adaef10b420fadc2449804b80832, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62778, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=569, status=ACTIVE, subnets=['f59f6d85-cc60-4e11-9820-ff7489b07b8c'], tags=[], 
tenant_id=10f9adaef10b420fadc2449804b80832, updated_at=2025-10-05T10:02:33Z, vlan_transparent=None, network_id=f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, port_security_enabled=False, project_id=10f9adaef10b420fadc2449804b80832, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=591, status=DOWN, tags=[], tenant_id=10f9adaef10b420fadc2449804b80832, updated_at=2025-10-05T10:02:40Z on network f019caf2-f140-40b3-a7d1-19d0fd0e8a5e#033[00m Oct 5 06:02:40 localhost podman[324751]: 2025-10-05 10:02:40.645111985 +0000 UTC m=+0.072244095 container kill 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:02:40 localhost dnsmasq[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/addn_hosts - 1 addresses Oct 5 06:02:40 localhost dnsmasq-dhcp[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/host Oct 5 06:02:40 localhost dnsmasq-dhcp[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/opts Oct 5 06:02:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:40.856 272040 INFO neutron.agent.dhcp.agent [None req-33069ec1-4590-434e-af6e-5a7000330c2b - - - - - -] DHCP configuration for ports {'80ac185f-decf-4b22-b82f-c32355c9fc3d'} is completed#033[00m Oct 5 06:02:40 localhost nova_compute[297021]: 2025-10-05 10:02:40.924 2 DEBUG nova.network.neutron [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 
e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Port 1374da87-a9a5-4840-80a7-197494b76131 updated with migration profile {'migrating_to': 'np0005471150.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Oct 5 06:02:40 localhost nova_compute[297021]: 2025-10-05 10:02:40.925 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmprnxabv3g',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b1dce7a2-b06b-4cdb-b072-ccd123742ded',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Oct 5 06:02:41 localhost sshd[324772]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:02:41 localhost systemd[1]: Created slice User Slice of UID 42436. Oct 5 06:02:41 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Oct 5 06:02:41 localhost systemd-logind[760]: New session 78 of user nova. Oct 5 06:02:41 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Oct 5 06:02:41 localhost systemd[1]: Starting User Manager for UID 42436... Oct 5 06:02:41 localhost systemd[324776]: Queued start job for default target Main User Target. 
Oct 5 06:02:41 localhost systemd[324776]: Created slice User Application Slice. Oct 5 06:02:41 localhost systemd[324776]: Started Mark boot as successful after the user session has run 2 minutes. Oct 5 06:02:41 localhost systemd[324776]: Started Daily Cleanup of User's Temporary Directories. Oct 5 06:02:41 localhost systemd[324776]: Reached target Paths. Oct 5 06:02:41 localhost systemd[324776]: Reached target Timers. Oct 5 06:02:41 localhost systemd[324776]: Starting D-Bus User Message Bus Socket... Oct 5 06:02:41 localhost systemd[324776]: Starting Create User's Volatile Files and Directories... Oct 5 06:02:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:41 localhost systemd[324776]: Listening on D-Bus User Message Bus Socket. Oct 5 06:02:41 localhost systemd[324776]: Reached target Sockets. Oct 5 06:02:41 localhost systemd[324776]: Finished Create User's Volatile Files and Directories. Oct 5 06:02:41 localhost systemd[324776]: Reached target Basic System. Oct 5 06:02:41 localhost systemd[324776]: Reached target Main User Target. Oct 5 06:02:41 localhost systemd[324776]: Startup finished in 160ms. Oct 5 06:02:41 localhost systemd[1]: Started User Manager for UID 42436. Oct 5 06:02:41 localhost systemd[1]: Started Session 78 of User nova. Oct 5 06:02:41 localhost kernel: device tap1374da87-a9 entered promiscuous mode Oct 5 06:02:41 localhost ovn_controller[157794]: 2025-10-05T10:02:41Z|00122|binding|INFO|Claiming lport 1374da87-a9a5-4840-80a7-197494b76131 for this additional chassis. Oct 5 06:02:41 localhost ovn_controller[157794]: 2025-10-05T10:02:41Z|00123|binding|INFO|1374da87-a9a5-4840-80a7-197494b76131: Claiming fa:16:3e:4b:06:97 10.100.0.12 Oct 5 06:02:41 localhost ovn_controller[157794]: 2025-10-05T10:02:41Z|00124|binding|INFO|Claiming lport 3fa04c44-9142-4d6c-991f-aca11ea8e8ee for this additional chassis. 
Oct 5 06:02:41 localhost ovn_controller[157794]: 2025-10-05T10:02:41Z|00125|binding|INFO|3fa04c44-9142-4d6c-991f-aca11ea8e8ee: Claiming fa:16:3e:ce:90:0e 19.80.0.175 Oct 5 06:02:41 localhost nova_compute[297021]: 2025-10-05 10:02:41.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:41 localhost NetworkManager[5981]: [1759658561.6339] manager: (tap1374da87-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/26) Oct 5 06:02:41 localhost systemd-udevd[324805]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:02:41 localhost NetworkManager[5981]: [1759658561.6542] device (tap1374da87-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 5 06:02:41 localhost NetworkManager[5981]: [1759658561.6553] device (tap1374da87-a9): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Oct 5 06:02:41 localhost ovn_controller[157794]: 2025-10-05T10:02:41Z|00126|binding|INFO|Setting lport 1374da87-a9a5-4840-80a7-197494b76131 ovn-installed in OVS Oct 5 06:02:41 localhost nova_compute[297021]: 2025-10-05 10:02:41.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:41 localhost systemd-machined[84982]: New machine qemu-4-instance-00000007. Oct 5 06:02:41 localhost systemd[1]: Started Virtual Machine qemu-4-instance-00000007. 
Oct 5 06:02:42 localhost nova_compute[297021]: 2025-10-05 10:02:42.349 2 DEBUG nova.virt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:02:42 localhost nova_compute[297021]: 2025-10-05 10:02:42.351 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] VM Started (Lifecycle Event)#033[00m Oct 5 06:02:42 localhost nova_compute[297021]: 2025-10-05 10:02:42.369 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:42.436 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:40Z, description=, device_id=b1ce920c-beca-46e2-9ac1-6e2a09f8eaa6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80ac185f-decf-4b22-b82f-c32355c9fc3d, ip_allocation=immediate, mac_address=fa:16:3e:53:47:c8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:02:31Z, description=, dns_domain=, id=f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-1435170142-network, port_security_enabled=True, project_id=10f9adaef10b420fadc2449804b80832, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62778, qos_policy_id=None, revision_number=2, router:external=False, 
shared=False, standard_attr_id=569, status=ACTIVE, subnets=['f59f6d85-cc60-4e11-9820-ff7489b07b8c'], tags=[], tenant_id=10f9adaef10b420fadc2449804b80832, updated_at=2025-10-05T10:02:33Z, vlan_transparent=None, network_id=f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, port_security_enabled=False, project_id=10f9adaef10b420fadc2449804b80832, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=591, status=DOWN, tags=[], tenant_id=10f9adaef10b420fadc2449804b80832, updated_at=2025-10-05T10:02:40Z on network f019caf2-f140-40b3-a7d1-19d0fd0e8a5e#033[00m Oct 5 06:02:42 localhost nova_compute[297021]: 2025-10-05 10:02:42.620 2 DEBUG nova.virt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:02:42 localhost nova_compute[297021]: 2025-10-05 10:02:42.621 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] VM Resumed (Lifecycle Event)#033[00m Oct 5 06:02:42 localhost nova_compute[297021]: 2025-10-05 10:02:42.648 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:42 localhost nova_compute[297021]: 2025-10-05 10:02:42.655 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 06:02:42 localhost nova_compute[297021]: 2025-10-05 10:02:42.687 2 INFO 
nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] During the sync_power process the instance has moved from host np0005471152.localdomain to host np0005471150.localdomain#033[00m Oct 5 06:02:42 localhost dnsmasq[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/addn_hosts - 1 addresses Oct 5 06:02:42 localhost dnsmasq-dhcp[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/host Oct 5 06:02:42 localhost dnsmasq-dhcp[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/opts Oct 5 06:02:42 localhost podman[324877]: 2025-10-05 10:02:42.735272569 +0000 UTC m=+0.074123286 container kill 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:02:42 localhost ovn_controller[157794]: 2025-10-05T10:02:42Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:06:97 10.100.0.12 Oct 5 06:02:42 localhost ovn_controller[157794]: 2025-10-05T10:02:42Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:06:97 10.100.0.12 Oct 5 06:02:42 localhost systemd[1]: session-78.scope: Deactivated successfully. Oct 5 06:02:42 localhost systemd-logind[760]: Session 78 logged out. Waiting for processes to exit. Oct 5 06:02:42 localhost systemd-logind[760]: Removed session 78. 
Oct 5 06:02:43 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:43.017 272040 INFO neutron.agent.dhcp.agent [None req-db59a070-2072-442a-b4a9-d1a8a5fa0e7a - - - - - -] DHCP configuration for ports {'80ac185f-decf-4b22-b82f-c32355c9fc3d'} is completed#033[00m Oct 5 06:02:43 localhost nova_compute[297021]: 2025-10-05 10:02:43.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:43 localhost nova_compute[297021]: 2025-10-05 10:02:43.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:43 localhost ovn_controller[157794]: 2025-10-05T10:02:43Z|00127|binding|INFO|Claiming lport 1374da87-a9a5-4840-80a7-197494b76131 for this chassis. Oct 5 06:02:43 localhost ovn_controller[157794]: 2025-10-05T10:02:43Z|00128|binding|INFO|1374da87-a9a5-4840-80a7-197494b76131: Claiming fa:16:3e:4b:06:97 10.100.0.12 Oct 5 06:02:43 localhost ovn_controller[157794]: 2025-10-05T10:02:43Z|00129|binding|INFO|Claiming lport 3fa04c44-9142-4d6c-991f-aca11ea8e8ee for this chassis. 
Oct 5 06:02:43 localhost ovn_controller[157794]: 2025-10-05T10:02:43Z|00130|binding|INFO|3fa04c44-9142-4d6c-991f-aca11ea8e8ee: Claiming fa:16:3e:ce:90:0e 19.80.0.175 Oct 5 06:02:43 localhost ovn_controller[157794]: 2025-10-05T10:02:43Z|00131|binding|INFO|Setting lport 1374da87-a9a5-4840-80a7-197494b76131 up in Southbound Oct 5 06:02:43 localhost ovn_controller[157794]: 2025-10-05T10:02:43Z|00132|binding|INFO|Setting lport 3fa04c44-9142-4d6c-991f-aca11ea8e8ee up in Southbound Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.932 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:90:0e 19.80.0.175'], port_security=['fa:16:3e:ce:90:0e 19.80.0.175'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['1374da87-a9a5-4840-80a7-197494b76131'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-973969040', 'neutron:cidrs': '19.80.0.175/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-973969040', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c80697f7-3043-40b9-ba7e-9e4d45b917f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3fa04c44-9142-4d6c-991f-aca11ea8e8ee) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.936 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:06:97 10.100.0.12'], port_security=['fa:16:3e:4b:06:97 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-738433439', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9493e121-6caf-4009-9106-31c87685c480', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-738433439', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0269f0ba-15e7-46b3-9fe6-9a4bc91e9d33, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=1374da87-a9a5-4840-80a7-197494b76131) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.938 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 3fa04c44-9142-4d6c-991f-aca11ea8e8ee in datapath 3b6dd988-c148-4dbf-ae5b-dba073193ccc bound to our chassis#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 
2025-10-05 10:02:43.941 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2400a2a9-29eb-4b1c-95c8-95af2ea69cd7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.942 163434 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b6dd988-c148-4dbf-ae5b-dba073193ccc#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.953 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[e4949f32-2099-4e2a-bf94-6a5271fa02a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.954 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b6dd988-c1 in ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.956 163567 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b6dd988-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.956 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[07ef9bfb-4089-49de-8a21-29100dcd0e8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.958 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[8371ceb2-db68-4255-a5ae-7d53dd8876d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.973 163645 DEBUG oslo.privsep.daemon [-] privsep: 
reply[8b5c49f0-7018-4727-9877-aa0d950a2b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:43.998 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[937cdedf-3785-401a-976d-d4317872a58a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:44.022 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-8958f746-4503-48be-b2a9-7764a3a89978 req-dfba47f5-8f4f-41d7-8b7d-4bb8fdc00592 18771fb2bfdc4183936e6691c1fde428 ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] This port is not SRIOV, skip binding for port 1374da87-a9a5-4840-80a7-197494b76131.#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.028 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[04cf3ae0-da6d-4771-8326-ae5d65861c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.035 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[63eef698-22c2-495c-a725-56b14735aaf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost systemd-udevd[324808]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:02:44 localhost NetworkManager[5981]: [1759658564.0392] manager: (tap3b6dd988-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/27) Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.071 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2e28e5-3612-4ca5-bc9d-e50b48e6b26b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.075 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[ed48b6b2-3b88-49f7-a5ff-5c72b5b9fbd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3b6dd988-c1: link becomes ready Oct 5 06:02:44 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3b6dd988-c0: link becomes ready Oct 5 06:02:44 localhost NetworkManager[5981]: [1759658564.1028] device (tap3b6dd988-c0): carrier: link connected Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.108 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[e808d664-1375-470c-9fe7-464a8909e5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.126 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[722f630a-3f2b-4032-81da-9a222f32dea7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b6dd988-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 
2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:22:5c:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 
1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1204325, 'reachable_time': 15300, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324921, 'error': None, 'target': 'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.148 2 INFO nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Post operation of migration started#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.148 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0bcc75-b861-4267-ab70-a20d04c7822f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:5c84'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1204325, 'tstamp': 1204325}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324922, 'error': None, 'target': 'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.166 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5da907-389c-4a15-9740-6c082a9379f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b6dd988-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 
'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:22:5c:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 
'tstamp': 1204325, 'reachable_time': 15300, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324923, 'error': None, 'target': 'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.201 
163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca609cf-98db-4775-9fd0-d56fed9a978d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.265 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[1824a1a8-3e9d-4154-8c2a-af193722db72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.267 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b6dd988-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.267 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.268 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b6dd988-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:44 localhost kernel: device tap3b6dd988-c0 entered promiscuous mode Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.279 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn 
n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b6dd988-c0, col_values=(('external_ids', {'iface-id': 'bac74788-cacd-4240-bc16-90e5547e0313'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:44 localhost ovn_controller[157794]: 2025-10-05T10:02:44Z|00133|binding|INFO|Releasing lport bac74788-cacd-4240-bc16-90e5547e0313 from this chassis (sb_readonly=0) Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.285 163434 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b6dd988-c148-4dbf-ae5b-dba073193ccc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b6dd988-c148-4dbf-ae5b-dba073193ccc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.286 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb7c157-656b-446c-8ebd-e00c50d8bb3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.287 163434 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: global Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: log /dev/log local0 debug Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: log-tag haproxy-metadata-proxy-3b6dd988-c148-4dbf-ae5b-dba073193ccc Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: user root Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: group root Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: maxconn 1024 Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: pidfile 
/var/lib/neutron/external/pids/3b6dd988-c148-4dbf-ae5b-dba073193ccc.pid.haproxy Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: daemon Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: defaults Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: log global Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: mode http Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: option httplog Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: option dontlognull Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: option http-server-close Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: option forwardfor Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: retries 3 Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: timeout http-request 30s Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: timeout connect 30s Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: timeout client 32s Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: timeout server 32s Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: timeout http-keep-alive 30s Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: listen listener Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: bind 169.254.169.254:80 Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: server metadata /var/lib/neutron/metadata_proxy Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: http-request add-header X-OVN-Network-ID 3b6dd988-c148-4dbf-ae5b-dba073193ccc Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.288 163434 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'env', 'PROCESS_TAG=haproxy-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b6dd988-c148-4dbf-ae5b-dba073193ccc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.347 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquiring lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.348 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquired lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.348 2 DEBUG nova.network.neutron [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:02:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:02:44 localhost systemd[1]: tmp-crun.SAPEDb.mount: Deactivated successfully. 
Oct 5 06:02:44 localhost podman[324950]: 2025-10-05 10:02:44.682067976 +0000 UTC m=+0.089358710 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.683 2 DEBUG nova.network.neutron [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Updating instance_info_cache with network_info: [{"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", 
"bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.718 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Releasing lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:02:44 localhost podman[324950]: 2025-10-05 10:02:44.721803516 +0000 UTC m=+0.129094240 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 
'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.734 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.734 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.734 2 DEBUG oslo_concurrency.lockutils [None 
req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:44 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:02:44 localhost nova_compute[297021]: 2025-10-05 10:02:44.742 2 INFO nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m Oct 5 06:02:44 localhost journal[207037]: Domain id=4 name='instance-00000007' uuid=b1dce7a2-b06b-4cdb-b072-ccd123742ded is tainted: custom-monitor Oct 5 06:02:44 localhost podman[324967]: Oct 5 06:02:44 localhost podman[324967]: 2025-10-05 10:02:44.75877392 +0000 UTC m=+0.112760325 container create 4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:02:44 localhost systemd[1]: Started libpod-conmon-4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835.scope. 
Oct 5 06:02:44 localhost podman[324967]: 2025-10-05 10:02:44.714006574 +0000 UTC m=+0.067993009 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 06:02:44 localhost systemd[1]: Started libcrun container. Oct 5 06:02:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51d8b23463d509415c279fbd92d2bea5c569d67e52be7233786bcad24db471c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:02:44 localhost podman[324967]: 2025-10-05 10:02:44.83749642 +0000 UTC m=+0.191482825 container init 4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:44 localhost podman[324967]: 2025-10-05 10:02:44.844015277 +0000 UTC m=+0.198001682 container start 4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:02:44 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[324994]: [NOTICE] (324998) : New worker (325000) forked Oct 5 06:02:44 
localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[324994]: [NOTICE] (324998) : Loading success. Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.904 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 1374da87-a9a5-4840-80a7-197494b76131 in datapath 9493e121-6caf-4009-9106-31c87685c480 unbound from our chassis#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.908 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6d1e4624-6fb5-4702-a61e-2573f14d74f8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.908 163434 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9493e121-6caf-4009-9106-31c87685c480#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.916 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[7741a255-a00f-48a6-93f7-4f4899ac2b1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.916 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9493e121-61 in ovnmeta-9493e121-6caf-4009-9106-31c87685c480 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.919 163567 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9493e121-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.919 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[643bab34-da4f-4d48-8765-fef392b9f32f]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.921 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[915a880b-80d2-432c-8dcc-7b5a333ea6c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.930 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e1a2e3-f0c5-4545-a03f-f3f2106f689b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.943 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[252d4a65-37d9-41ec-ba4a-0aed872af3e3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.964 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1a639e-e98f-41d3-975e-7494690b02ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost NetworkManager[5981]: [1759658564.9710] manager: (tap9493e121-60): new Veth device (/org/freedesktop/NetworkManager/Devices/28) Oct 5 06:02:44 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:44.969 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b3431fb5-9f29-43e2-ab90-6bfe3c114769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:44 localhost systemd-udevd[324914]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.003 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[cc35460c-fd96-4bac-924a-b296bb3924e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.007 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9b5c27-0b4e-4941-a043-8ed358a9b7ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:45 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9493e121-60: link becomes ready Oct 5 06:02:45 localhost NetworkManager[5981]: [1759658565.0275] device (tap9493e121-60): carrier: link connected Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.030 163625 DEBUG oslo.privsep.daemon [-] privsep: reply[c91b4128-5f43-4de6-bd86-28b5a7475348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.047 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[d367d43f-fcd0-4a2d-8fb6-2d9d23534692]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9493e121-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:a2:ce:ee'], ['IFLA_BROADCAST', 
'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1204418, 'reachable_time': 41103, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 
1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325019, 'error': None, 'target': 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.062 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b5149bfd-9638-4308-8620-85d124d6dcca]: (4, ({'family': 10, 'prefixlen': 64, 
'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:ceee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1204418, 'tstamp': 1204418}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325020, 'error': None, 'target': 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.079 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab955cd-9997-4bd9-a7f5-b9de485387c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9493e121-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:a2:ce:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 
'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1204418, 'reachable_time': 41103, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 
'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325021, 'error': None, 'target': 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.103 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae3111b-3c28-4bd1-a1c2-e6229faece69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.163 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[e334665a-c7fc-4693-beb4-7b06e5bffc5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.165 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 
command(idx=0): DelPortCommand(_result=None, port=tap9493e121-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.165 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.166 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9493e121-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:45 localhost nova_compute[297021]: 2025-10-05 10:02:45.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:45 localhost kernel: device tap9493e121-60 entered promiscuous mode Oct 5 06:02:45 localhost nova_compute[297021]: 2025-10-05 10:02:45.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.197 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9493e121-60, col_values=(('external_ids', {'iface-id': '3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:45 localhost nova_compute[297021]: 2025-10-05 10:02:45.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:45 localhost ovn_controller[157794]: 2025-10-05T10:02:45Z|00134|binding|INFO|Releasing lport 
3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0 from this chassis (sb_readonly=0) Oct 5 06:02:45 localhost nova_compute[297021]: 2025-10-05 10:02:45.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.203 163434 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9493e121-6caf-4009-9106-31c87685c480.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9493e121-6caf-4009-9106-31c87685c480.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.204 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce7e9e0-1891-46c1-939c-c213143f44ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.205 163434 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: global Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: log /dev/log local0 debug Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: log-tag haproxy-metadata-proxy-9493e121-6caf-4009-9106-31c87685c480 Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: user root Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: group root Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: maxconn 1024 Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: pidfile /var/lib/neutron/external/pids/9493e121-6caf-4009-9106-31c87685c480.pid.haproxy Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: daemon Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: defaults Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: log global Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 
mode http Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: option httplog Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: option dontlognull Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: option http-server-close Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: option forwardfor Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: retries 3 Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: timeout http-request 30s Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: timeout connect 30s Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: timeout client 32s Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: timeout server 32s Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: timeout http-keep-alive 30s Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: listen listener Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: bind 169.254.169.254:80 Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: server metadata /var/lib/neutron/metadata_proxy Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: http-request add-header X-OVN-Network-ID 9493e121-6caf-4009-9106-31c87685c480 Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 5 06:02:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:45.206 163434 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'env', 'PROCESS_TAG=haproxy-9493e121-6caf-4009-9106-31c87685c480', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9493e121-6caf-4009-9106-31c87685c480.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 5 06:02:45 localhost nova_compute[297021]: 2025-10-05 10:02:45.211 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:45 localhost podman[325053]: Oct 5 06:02:45 localhost podman[325053]: 2025-10-05 10:02:45.629005885 +0000 UTC m=+0.085623639 container create beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:02:45 localhost systemd[1]: Started libpod-conmon-beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1.scope. Oct 5 06:02:45 localhost podman[325053]: 2025-10-05 10:02:45.573578688 +0000 UTC m=+0.030196502 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 06:02:45 localhost systemd[1]: Started libcrun container. 
Oct 5 06:02:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bd8e8c23397872c6f54bcbf683bb4a6cbd94a98b07bbd4b1d8d838c995f9588/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:02:45 localhost podman[325053]: 2025-10-05 10:02:45.695677656 +0000 UTC m=+0.152295410 container init beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:02:45 localhost podman[325053]: 2025-10-05 10:02:45.705579596 +0000 UTC m=+0.162197350 container start beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:02:45 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[325067]: [NOTICE] (325071) : New worker (325073) forked Oct 5 06:02:45 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[325067]: [NOTICE] (325071) : Loading success. 
Oct 5 06:02:45 localhost nova_compute[297021]: 2025-10-05 10:02:45.751 2 INFO nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 5 06:02:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:02:46 localhost nova_compute[297021]: 2025-10-05 10:02:46.758 2 INFO nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 5 06:02:46 localhost nova_compute[297021]: 2025-10-05 10:02:46.764 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 5 06:02:46 localhost nova_compute[297021]: 2025-10-05 10:02:46.795 2 DEBUG nova.objects.instance [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 5 06:02:47 localhost podman[325099]: 2025-10-05 10:02:47.236755406 +0000 UTC m=+0.058680336 container kill 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 5 06:02:47 localhost dnsmasq[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/addn_hosts - 0 addresses
Oct 5 06:02:47 localhost dnsmasq-dhcp[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/host
Oct 5 06:02:47 localhost dnsmasq-dhcp[324710]: read /var/lib/neutron/dhcp/f019caf2-f140-40b3-a7d1-19d0fd0e8a5e/opts
Oct 5 06:02:47 localhost ovn_controller[157794]: 2025-10-05T10:02:47Z|00135|binding|INFO|Releasing lport 5541c352-d8e1-4fa6-9cbe-7297a76a9005 from this chassis (sb_readonly=0)
Oct 5 06:02:47 localhost nova_compute[297021]: 2025-10-05 10:02:47.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:47 localhost ovn_controller[157794]: 2025-10-05T10:02:47Z|00136|binding|INFO|Setting lport 5541c352-d8e1-4fa6-9cbe-7297a76a9005 down in Southbound
Oct 5 06:02:47 localhost kernel: device tap5541c352-d8 left promiscuous mode
Oct 5 06:02:47 localhost nova_compute[297021]: 2025-10-05 10:02:47.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:47 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:47.419 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10f9adaef10b420fadc2449804b80832', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbfa472a-c33c-4cb5-b1c8-ba60d8f673f9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5541c352-d8e1-4fa6-9cbe-7297a76a9005) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:02:47 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:47.422 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 5541c352-d8e1-4fa6-9cbe-7297a76a9005 in datapath f019caf2-f140-40b3-a7d1-19d0fd0e8a5e unbound from our chassis
Oct 5 06:02:47 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:47.425 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 5 06:02:47 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:47.426 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[1366c8e9-e5ea-4276-a584-481d7c58752f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:47 localhost nova_compute[297021]: 2025-10-05 10:02:47.789 2 DEBUG nova.compute.manager [req-8b0ddfaf-8c12-4b57-81d0-7d5ea742bb3a req-e25135b5-ad32-4e46-af87-434de8fd4eed 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 5 06:02:47 localhost nova_compute[297021]: 2025-10-05 10:02:47.789 2 DEBUG oslo_concurrency.lockutils [req-8b0ddfaf-8c12-4b57-81d0-7d5ea742bb3a req-e25135b5-ad32-4e46-af87-434de8fd4eed 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:47 localhost nova_compute[297021]: 2025-10-05 10:02:47.790 2 DEBUG oslo_concurrency.lockutils [req-8b0ddfaf-8c12-4b57-81d0-7d5ea742bb3a req-e25135b5-ad32-4e46-af87-434de8fd4eed 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:47 localhost nova_compute[297021]: 2025-10-05 10:02:47.790 2 DEBUG oslo_concurrency.lockutils [req-8b0ddfaf-8c12-4b57-81d0-7d5ea742bb3a req-e25135b5-ad32-4e46-af87-434de8fd4eed 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:47 localhost nova_compute[297021]: 2025-10-05 10:02:47.791 2 DEBUG nova.compute.manager [req-8b0ddfaf-8c12-4b57-81d0-7d5ea742bb3a req-e25135b5-ad32-4e46-af87-434de8fd4eed 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] No waiting events found dispatching network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 5 06:02:47 localhost nova_compute[297021]: 2025-10-05 10:02:47.791 2 WARNING nova.compute.manager [req-8b0ddfaf-8c12-4b57-81d0-7d5ea742bb3a req-e25135b5-ad32-4e46-af87-434de8fd4eed 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received unexpected event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 for instance with vm_state active and task_state None.
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.399 2 DEBUG oslo_concurrency.lockutils [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.400 2 DEBUG oslo_concurrency.lockutils [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.400 2 DEBUG oslo_concurrency.lockutils [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.400 2 DEBUG oslo_concurrency.lockutils [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.401 2 DEBUG oslo_concurrency.lockutils [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.402 2 INFO nova.compute.manager [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Terminating instance
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.404 2 DEBUG nova.compute.manager [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 5 06:02:48 localhost kernel: device tap1374da87-a9 left promiscuous mode
Oct 5 06:02:48 localhost NetworkManager[5981]: <info>  [1759658568.5193] device (tap1374da87-a9): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Oct 5 06:02:48 localhost ovn_controller[157794]: 2025-10-05T10:02:48Z|00137|binding|INFO|Releasing lport 1374da87-a9a5-4840-80a7-197494b76131 from this chassis (sb_readonly=0)
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:48 localhost ovn_controller[157794]: 2025-10-05T10:02:48Z|00138|binding|INFO|Setting lport 1374da87-a9a5-4840-80a7-197494b76131 down in Southbound
Oct 5 06:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 06:02:48 localhost ovn_controller[157794]: 2025-10-05T10:02:48Z|00139|binding|INFO|Releasing lport 3fa04c44-9142-4d6c-991f-aca11ea8e8ee from this chassis (sb_readonly=0)
Oct 5 06:02:48 localhost ovn_controller[157794]: 2025-10-05T10:02:48Z|00140|binding|INFO|Setting lport 3fa04c44-9142-4d6c-991f-aca11ea8e8ee down in Southbound
Oct 5 06:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 06:02:48 localhost ovn_controller[157794]: 2025-10-05T10:02:48Z|00141|binding|INFO|Removing iface tap1374da87-a9 ovn-installed in OVS
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.569 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:90:0e 19.80.0.175'], port_security=['fa:16:3e:ce:90:0e 19.80.0.175'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['1374da87-a9a5-4840-80a7-197494b76131'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-973969040', 'neutron:cidrs': '19.80.0.175/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-973969040', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c80697f7-3043-40b9-ba7e-9e4d45b917f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3fa04c44-9142-4d6c-991f-aca11ea8e8ee) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:02:48 localhost ovn_controller[157794]: 2025-10-05T10:02:48Z|00142|binding|INFO|Releasing lport bac74788-cacd-4240-bc16-90e5547e0313 from this chassis (sb_readonly=0)
Oct 5 06:02:48 localhost ovn_controller[157794]: 2025-10-05T10:02:48Z|00143|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:02:48 localhost ovn_controller[157794]: 2025-10-05T10:02:48Z|00144|binding|INFO|Releasing lport 3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0 from this chassis (sb_readonly=0)
Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.572 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:06:97 10.100.0.12'], port_security=['fa:16:3e:4b:06:97 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-738433439', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9493e121-6caf-4009-9106-31c87685c480', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-738433439', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0269f0ba-15e7-46b3-9fe6-9a4bc91e9d33, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=1374da87-a9a5-4840-80a7-197494b76131) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.574 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 3fa04c44-9142-4d6c-991f-aca11ea8e8ee in datapath 3b6dd988-c148-4dbf-ae5b-dba073193ccc unbound from our chassis
Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.578 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2400a2a9-29eb-4b1c-95c8-95af2ea69cd7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.578 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b6dd988-c148-4dbf-ae5b-dba073193ccc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.580 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[541accb0-1050-4f00-8cda-ac7947523358]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.580 163434 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc namespace which is not needed anymore
Oct 5 06:02:48 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 5 06:02:48 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Consumed 1.554s CPU time.
Oct 5 06:02:48 localhost systemd-machined[84982]: Machine qemu-4-instance-00000007 terminated.
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.644 2 INFO nova.virt.libvirt.driver [-] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Instance destroyed successfully.
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.644 2 DEBUG nova.objects.instance [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lazy-loading 'resources' on Instance uuid b1dce7a2-b06b-4cdb-b072-ccd123742ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.659 2 DEBUG nova.virt.libvirt.vif [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-05T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2001023684',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005471150.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2001023684',id=7,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-05T10:02:28Z,launched_on='np0005471152.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='np0005471150.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b069d6351214d1baf4ff391a6512beb',ramdisk_id='',reservation_id='r-k8v41bv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1030348059',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1030348059-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-05T10:02:46Z,user_data=None,user_id='b56f1071781246a68c1693519a9cd054',uuid=b1dce7a2-b06b-4cdb-b072-ccd123742ded,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.659 2 DEBUG nova.network.os_vif_util [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Converting VIF {"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.660 2 DEBUG nova.network.os_vif_util [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.660 2 DEBUG os_vif [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1374da87-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.674 2 INFO os_vif [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9')
Oct 5 06:02:48 localhost systemd[1]: tmp-crun.5gTsGc.mount: Deactivated successfully.
Oct 5 06:02:48 localhost podman[325127]: 2025-10-05 10:02:48.709879535 +0000 UTC m=+0.126133616 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, container_name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 5 06:02:48 localhost podman[325128]: 2025-10-05 10:02:48.728653465 +0000 UTC m=+0.151814244 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 5 06:02:48 localhost podman[325127]: 2025-10-05 10:02:48.744771093 +0000 UTC m=+0.161025204 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 5 06:02:48 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 06:02:48 localhost podman[325128]: 2025-10-05 10:02:48.766093922 +0000 UTC m=+0.189254691 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 5 06:02:48 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 06:02:48 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[324994]: [NOTICE] (324998) : haproxy version is 2.8.14-c23fe91 Oct 5 06:02:48 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[324994]: [NOTICE] (324998) : path to executable is /usr/sbin/haproxy Oct 5 06:02:48 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[324994]: [WARNING] (324998) : Exiting Master process... Oct 5 06:02:48 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[324994]: [ALERT] (324998) : Current worker (325000) exited with code 143 (Terminated) Oct 5 06:02:48 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[324994]: [WARNING] (324998) : All workers exited. Exiting... (0) Oct 5 06:02:48 localhost systemd[1]: libpod-4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835.scope: Deactivated successfully. Oct 5 06:02:48 localhost podman[325192]: 2025-10-05 10:02:48.828203339 +0000 UTC m=+0.114358577 container died 4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:02:48 localhost podman[325192]: 2025-10-05 10:02:48.859514179 +0000 UTC m=+0.145669407 container cleanup 4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:02:48 localhost podman[325219]: 2025-10-05 10:02:48.893623266 +0000 UTC m=+0.062275743 container cleanup 4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:02:48 localhost systemd[1]: libpod-conmon-4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835.scope: Deactivated successfully. 
Oct 5 06:02:48 localhost podman[325234]: 2025-10-05 10:02:48.953278025 +0000 UTC m=+0.077832555 container remove 4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.964 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[ee021ca3-6d73-44ce-9c68-dc0dfec855f3]: (4, ('Sun Oct 5 10:02:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc (4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835)\n4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835\nSun Oct 5 10:02:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc (4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835)\n4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.966 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[6337a5e3-3166-4927-bb94-64c0f857fd45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.967 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b6dd988-c0, bridge=None, if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:48 localhost kernel: device tap3b6dd988-c0 left promiscuous mode Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.976 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[5683119a-ab71-45d8-a781-214821e63543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:48 localhost nova_compute[297021]: 2025-10-05 10:02:48.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.992 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[766e252d-ca6e-4b87-a69e-e78f35957820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:48.994 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[4eeb0a28-2301-48ed-9bcb-942d3793aa94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.010 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[036c48bb-4cd5-492b-be13-db48e187353a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], 
['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 
'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1204317, 'reachable_time': 29381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 
'sequence_number': 255, 'pid': 325253, 'error': None, 'target': 'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.012 163645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.012 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[64d80d90-30d3-49ea-a20d-bc04720d06e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.013 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 1374da87-a9a5-4840-80a7-197494b76131 in datapath 9493e121-6caf-4009-9106-31c87685c480 unbound from our chassis#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.015 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6d1e4624-6fb5-4702-a61e-2573f14d74f8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.015 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9493e121-6caf-4009-9106-31c87685c480, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.016 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[8273f636-9109-4e02-99e6-f126b05eab44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m 
Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.016 163434 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9493e121-6caf-4009-9106-31c87685c480 namespace which is not needed anymore#033[00m Oct 5 06:02:49 localhost ovn_controller[157794]: 2025-10-05T10:02:49Z|00145|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:02:49 localhost ovn_controller[157794]: 2025-10-05T10:02:49Z|00146|binding|INFO|Releasing lport 3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0 from this chassis (sb_readonly=0) Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:49 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[325067]: [NOTICE] (325071) : haproxy version is 2.8.14-c23fe91 Oct 5 06:02:49 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[325067]: [NOTICE] (325071) : path to executable is /usr/sbin/haproxy Oct 5 06:02:49 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[325067]: [WARNING] (325071) : Exiting Master process... Oct 5 06:02:49 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[325067]: [ALERT] (325071) : Current worker (325073) exited with code 143 (Terminated) Oct 5 06:02:49 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[325067]: [WARNING] (325071) : All workers exited. Exiting... (0) Oct 5 06:02:49 localhost systemd[1]: libpod-beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1.scope: Deactivated successfully. 
Oct 5 06:02:49 localhost podman[325272]: 2025-10-05 10:02:49.219203007 +0000 UTC m=+0.079109379 container died beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:02:49 localhost podman[325272]: 2025-10-05 10:02:49.259040639 +0000 UTC m=+0.118946971 container cleanup beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:02:49 localhost podman[325286]: 2025-10-05 10:02:49.298987335 +0000 UTC m=+0.071296198 container cleanup beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3) Oct 5 06:02:49 localhost systemd[1]: libpod-conmon-beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1.scope: Deactivated successfully. Oct 5 06:02:49 localhost podman[325300]: 2025-10-05 10:02:49.36842977 +0000 UTC m=+0.086509320 container remove beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.373 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[44353c3e-a254-438b-8175-5407b20e3aa1]: (4, ('Sun Oct 5 10:02:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480 (beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1)\nbeed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1\nSun Oct 5 10:02:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480 (beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1)\nbeed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.375 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[4b228755-72eb-4fb7-8a28-9ccdc5345ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.376 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running 
txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9493e121-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:49 localhost kernel: device tap9493e121-60 left promiscuous mode Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.398 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[a4558714-5940-4233-9671-e7da1f6700e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.401 2 INFO nova.virt.libvirt.driver [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Deleting instance files /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded_del#033[00m Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.402 2 INFO nova.virt.libvirt.driver [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Deletion of /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded_del complete#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.415 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[d88868dc-57d3-48a2-9d7d-ea95290bb719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost 
ovn_metadata_agent[163429]: 2025-10-05 10:02:49.416 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d6b612-822e-4d47-936b-a55b154f5c20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.431 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[8d22b75a-6edf-46f2-bb13-7e28c7072cdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 
'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1204411, 'reachable_time': 21848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325318, 'error': None, 'target': 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.438 163645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9493e121-6caf-4009-9106-31c87685c480 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 5 06:02:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:49.438 163645 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf82a4b-f301-40a9-aeb5-9bb372fd583f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.465 2 INFO nova.compute.manager [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.467 2 DEBUG oslo.service.loopingcall [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.467 2 DEBUG nova.compute.manager [-] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.468 2 DEBUG nova.network.neutron [-] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 5 06:02:49 localhost dnsmasq[324710]: exiting on receipt of SIGTERM
Oct 5 06:02:49 localhost podman[325335]: 2025-10-05 10:02:49.595429344 +0000 UTC m=+0.064819921 container kill 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 5 06:02:49 localhost systemd[1]: libpod-49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae.scope: Deactivated successfully.
Oct 5 06:02:49 localhost podman[325347]: 2025-10-05 10:02:49.66704728 +0000 UTC m=+0.060476334 container died 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2)
Oct 5 06:02:49 localhost systemd[1]: var-lib-containers-storage-overlay-7bd8e8c23397872c6f54bcbf683bb4a6cbd94a98b07bbd4b1d8d838c995f9588-merged.mount: Deactivated successfully.
Oct 5 06:02:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-beed7c7c1ec621bca6f9e194b8ab7f8b2129c13fd2ca125145ad72b005ddc4f1-userdata-shm.mount: Deactivated successfully.
Oct 5 06:02:49 localhost systemd[1]: run-netns-ovnmeta\x2d9493e121\x2d6caf\x2d4009\x2d9106\x2d31c87685c480.mount: Deactivated successfully.
Oct 5 06:02:49 localhost systemd[1]: var-lib-containers-storage-overlay-51d8b23463d509415c279fbd92d2bea5c569d67e52be7233786bcad24db471c6-merged.mount: Deactivated successfully.
Oct 5 06:02:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ac9e4ebf327c36ebd0ccf44f65f242210a31944e012f1b8529bd71acbb22835-userdata-shm.mount: Deactivated successfully.
Oct 5 06:02:49 localhost systemd[1]: run-netns-ovnmeta\x2d3b6dd988\x2dc148\x2d4dbf\x2dae5b\x2ddba073193ccc.mount: Deactivated successfully.
Oct 5 06:02:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae-userdata-shm.mount: Deactivated successfully.
Oct 5 06:02:49 localhost systemd[1]: var-lib-containers-storage-overlay-0bfbfa1ede400325cfdfe592371797e3059bebaee27167f4149e791136f9c9cd-merged.mount: Deactivated successfully.
Oct 5 06:02:49 localhost podman[325347]: 2025-10-05 10:02:49.704684442 +0000 UTC m=+0.098113446 container cleanup 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3)
Oct 5 06:02:49 localhost systemd[1]: libpod-conmon-49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae.scope: Deactivated successfully.
Oct 5 06:02:49 localhost podman[325352]: 2025-10-05 10:02:49.746178679 +0000 UTC m=+0.128800589 container remove 49b38f6fde0c15c7a0888e3f1a12ab689a49d9432e2e685542ff5049079d4bae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f019caf2-f140-40b3-a7d1-19d0fd0e8a5e, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:02:49 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:49.780 272040 INFO neutron.agent.dhcp.agent [None req-9bc85f52-a542-4cc7-9526-cc8c37ab0302 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:02:49 localhost systemd[1]: run-netns-qdhcp\x2df019caf2\x2df140\x2d40b3\x2da7d1\x2d19d0fd0e8a5e.mount: Deactivated successfully.
Oct 5 06:02:49 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:49.780 272040 INFO neutron.agent.dhcp.agent [None req-9bc85f52-a542-4cc7-9526-cc8c37ab0302 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.902 2 DEBUG nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.902 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.903 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.903 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.904 2 DEBUG nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] No waiting events found dispatching network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.904 2 WARNING nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received unexpected event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 for instance with vm_state active and task_state deleting.
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.904 2 DEBUG nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.904 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.905 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.905 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.905 2 DEBUG nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] No waiting events found dispatching network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.906 2 WARNING nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received unexpected event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 for instance with vm_state active and task_state deleting.
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.906 2 DEBUG nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-unplugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.906 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.907 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.907 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.907 2 DEBUG nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] No waiting events found dispatching network-vif-unplugged-1374da87-a9a5-4840-80a7-197494b76131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.908 2 DEBUG nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-unplugged-1374da87-a9a5-4840-80a7-197494b76131 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.908 2 DEBUG nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.908 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.908 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.909 2 DEBUG oslo_concurrency.lockutils [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.909 2 DEBUG nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] No waiting events found dispatching network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 5 06:02:49 localhost nova_compute[297021]: 2025-10-05 10:02:49.909 2 WARNING nova.compute.manager [req-c2b17ed6-345f-4afa-979d-f9a54299ed9f req-93299be3-1094-43b0-a6e1-73b24eeea0ac 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received unexpected event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 for instance with vm_state active and task_state deleting.
Oct 5 06:02:50 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:50.543 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:10Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1374da87-a9a5-4840-80a7-197494b76131, ip_allocation=immediate, mac_address=fa:16:3e:4b:06:97, name=tempest-parent-738433439, network_id=9493e121-6caf-4009-9106-31c87685c480, port_security_enabled=True, project_id=1b069d6351214d1baf4ff391a6512beb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=14, security_groups=['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149'], standard_attr_id=485, status=DOWN, tags=[], tenant_id=1b069d6351214d1baf4ff391a6512beb, trunk_details=sub_ports=[], trunk_id=b550f6bc-4b02-45ea-9fde-d1fa93bf86e6, updated_at=2025-10-05T10:02:50Z on network 9493e121-6caf-4009-9106-31c87685c480
Oct 5 06:02:50 localhost podman[325392]: 2025-10-05 10:02:50.761983856 +0000 UTC m=+0.048007405 container kill 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:02:50 localhost dnsmasq[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/addn_hosts - 2 addresses
Oct 5 06:02:50 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/host
Oct 5 06:02:50 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/opts
Oct 5 06:02:50 localhost nova_compute[297021]: 2025-10-05 10:02:50.768 2 DEBUG nova.network.neutron [-] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 5 06:02:50 localhost nova_compute[297021]: 2025-10-05 10:02:50.784 2 INFO nova.compute.manager [-] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Took 1.32 seconds to deallocate network for instance.
Oct 5 06:02:50 localhost nova_compute[297021]: 2025-10-05 10:02:50.828 2 DEBUG oslo_concurrency.lockutils [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:02:50 localhost nova_compute[297021]: 2025-10-05 10:02:50.828 2 DEBUG oslo_concurrency.lockutils [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:02:50 localhost nova_compute[297021]: 2025-10-05 10:02:50.831 2 DEBUG oslo_concurrency.lockutils [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:50 localhost nova_compute[297021]: 2025-10-05 10:02:50.881 2 INFO nova.scheduler.client.report [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Deleted allocations for instance b1dce7a2-b06b-4cdb-b072-ccd123742ded
Oct 5 06:02:50 localhost nova_compute[297021]: 2025-10-05 10:02:50.952 2 DEBUG oslo_concurrency.lockutils [None req-2fdf0e3c-3376-4f1a-9855-1ca8134832ff b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:02:50 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:50.988 272040 INFO neutron.agent.dhcp.agent [None req-7be360a2-f281-4438-bcf6-56b4a6b978ee - - - - - -] DHCP configuration for ports {'1374da87-a9a5-4840-80a7-197494b76131'} is completed
Oct 5 06:02:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:02:51 localhost podman[248506]: time="2025-10-05T10:02:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:02:51 localhost podman[248506]: @ - - [05/Oct/2025:10:02:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149324 "" "Go-http-client/1.1"
Oct 5 06:02:51 localhost podman[248506]: @ - - [05/Oct/2025:10:02:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20320 "" "Go-http-client/1.1"
Oct 5 06:02:52 localhost openstack_network_exporter[250601]: ERROR 10:02:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:02:52 localhost openstack_network_exporter[250601]: ERROR 10:02:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:02:52 localhost openstack_network_exporter[250601]: ERROR 10:02:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:02:52 localhost openstack_network_exporter[250601]: ERROR 10:02:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:02:52 localhost openstack_network_exporter[250601]:
Oct 5 06:02:52 localhost openstack_network_exporter[250601]: ERROR 10:02:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:02:52 localhost openstack_network_exporter[250601]:
Oct 5 06:02:52 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:52.741 2 INFO neutron.agent.securitygroups_rpc [None req-750911e5-7403-463f-90a2-497baee3fcf4 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Security group member updated ['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149']
Oct 5 06:02:52 localhost dnsmasq[324008]: read /var/lib/neutron/dhcp/3b6dd988-c148-4dbf-ae5b-dba073193ccc/addn_hosts - 0 addresses
Oct 5 06:02:52 localhost dnsmasq-dhcp[324008]: read /var/lib/neutron/dhcp/3b6dd988-c148-4dbf-ae5b-dba073193ccc/host
Oct 5 06:02:52 localhost dnsmasq-dhcp[324008]: read /var/lib/neutron/dhcp/3b6dd988-c148-4dbf-ae5b-dba073193ccc/opts
Oct 5 06:02:52 localhost podman[325433]: 2025-10-05 10:02:52.970567855 +0000 UTC m=+0.050525403 container kill ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b6dd988-c148-4dbf-ae5b-dba073193ccc, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:02:53 localhost systemd[1]: Stopping User Manager for UID 42436...
Oct 5 06:02:53 localhost systemd[324776]: Activating special unit Exit the Session...
Oct 5 06:02:53 localhost systemd[324776]: Stopped target Main User Target.
Oct 5 06:02:53 localhost systemd[324776]: Stopped target Basic System.
Oct 5 06:02:53 localhost systemd[324776]: Stopped target Paths.
Oct 5 06:02:53 localhost systemd[324776]: Stopped target Sockets.
Oct 5 06:02:53 localhost systemd[324776]: Stopped target Timers.
Oct 5 06:02:53 localhost systemd[324776]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 5 06:02:53 localhost systemd[324776]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 5 06:02:53 localhost systemd[324776]: Closed D-Bus User Message Bus Socket.
Oct 5 06:02:53 localhost systemd[324776]: Stopped Create User's Volatile Files and Directories.
Oct 5 06:02:53 localhost systemd[324776]: Removed slice User Application Slice.
Oct 5 06:02:53 localhost systemd[324776]: Reached target Shutdown.
Oct 5 06:02:53 localhost systemd[324776]: Finished Exit the Session.
Oct 5 06:02:53 localhost systemd[324776]: Reached target Exit the Session.
Oct 5 06:02:53 localhost systemd[1]: user@42436.service: Deactivated successfully.
Oct 5 06:02:53 localhost systemd[1]: Stopped User Manager for UID 42436.
Oct 5 06:02:53 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 5 06:02:53 localhost systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 5 06:02:53 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 5 06:02:53 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 5 06:02:53 localhost systemd[1]: Removed slice User Slice of UID 42436.
Oct 5 06:02:53 localhost ovn_controller[157794]: 2025-10-05T10:02:53Z|00147|binding|INFO|Removing iface tap0f2d4d75-e7 ovn-installed in OVS
Oct 5 06:02:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:53.386 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 2400a2a9-29eb-4b1c-95c8-95af2ea69cd7 with type ""
Oct 5 06:02:53 localhost ovn_controller[157794]: 2025-10-05T10:02:53Z|00148|binding|INFO|Removing lport 0f2d4d75-e75c-43f9-8fce-7465c9a57717 ovn-installed in OVS
Oct 5 06:02:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:53.387 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c80697f7-3043-40b9-ba7e-9e4d45b917f9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0f2d4d75-e75c-43f9-8fce-7465c9a57717) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:02:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:53.389 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 0f2d4d75-e75c-43f9-8fce-7465c9a57717 in datapath 3b6dd988-c148-4dbf-ae5b-dba073193ccc unbound from our chassis
Oct 5 06:02:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:53.392 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b6dd988-c148-4dbf-ae5b-dba073193ccc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 5 06:02:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:53.393 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5d4e80-c854-4e94-818e-cf7a3a972f03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:53 localhost systemd[1]: tmp-crun.n07V73.mount: Deactivated successfully.
Oct 5 06:02:53 localhost nova_compute[297021]: 2025-10-05 10:02:53.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:53 localhost podman[325472]: 2025-10-05 10:02:53.440102567 +0000 UTC m=+0.116055443 container kill ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 06:02:53 localhost dnsmasq[324008]: exiting on receipt of SIGTERM
Oct 5 06:02:53 localhost systemd[1]: libpod-ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e.scope: Deactivated successfully.
Oct 5 06:02:53 localhost podman[325484]: 2025-10-05 10:02:53.526303808 +0000 UTC m=+0.070608499 container died ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:02:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 5 06:02:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e93 e93: 6 total, 6 up, 6 in
Oct 5 06:02:53 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e93: 6 total, 6 up, 6 in
Oct 5 06:02:53 localhost podman[325484]: 2025-10-05 10:02:53.561079542 +0000 UTC m=+0.105384193 container cleanup ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b6dd988-c148-4dbf-ae5b-dba073193ccc, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:02:53 localhost systemd[1]: libpod-conmon-ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e.scope: Deactivated successfully.
Oct 5 06:02:53 localhost podman[325487]: 2025-10-05 10:02:53.600106432 +0000 UTC m=+0.132872889 container remove ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b6dd988-c148-4dbf-ae5b-dba073193ccc, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 5 06:02:53 localhost kernel: device tap0f2d4d75-e7 left promiscuous mode
Oct 5 06:02:53 localhost nova_compute[297021]: 2025-10-05 10:02:53.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:53 localhost nova_compute[297021]: 2025-10-05 10:02:53.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:53 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:53.650 272040 INFO neutron.agent.dhcp.agent [None req-6aee27ca-b1c1-4c3c-ac61-fa6ad864af87 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:02:53 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:53.651 272040 INFO neutron.agent.dhcp.agent [None req-6aee27ca-b1c1-4c3c-ac61-fa6ad864af87 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:02:53 localhost nova_compute[297021]: 2025-10-05 10:02:53.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:53 localhost nova_compute[297021]: 2025-10-05 10:02:53.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:53 localhost ovn_controller[157794]: 2025-10-05T10:02:53Z|00149|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:02:53 localhost nova_compute[297021]: 2025-10-05 10:02:53.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:53 localhost systemd[1]: var-lib-containers-storage-overlay-e8bad2f41e9cbfa02a9a26d70764461c62da1bf606c4e946e61c1d7f215a1636-merged.mount: Deactivated successfully.
Oct 5 06:02:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed0ce7fd40fdee5a50bba1bc98fe9d7f6a7b2fc9744561fd4936bde19aa3811e-userdata-shm.mount: Deactivated successfully.
Oct 5 06:02:53 localhost systemd[1]: run-netns-qdhcp\x2d3b6dd988\x2dc148\x2d4dbf\x2dae5b\x2ddba073193ccc.mount: Deactivated successfully.
Oct 5 06:02:54 localhost neutron_sriov_agent[264984]: 2025-10-05 10:02:54.068 2 INFO neutron.agent.securitygroups_rpc [None req-1d6ffaf6-6781-4c7e-bba6-1906ee2d0509 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Security group member updated ['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149']
Oct 5 06:02:54 localhost dnsmasq[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/addn_hosts - 1 addresses
Oct 5 06:02:54 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/host
Oct 5 06:02:54 localhost podman[325531]: 2025-10-05 10:02:54.337614141 +0000 UTC m=+0.078544524 container kill 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 5 06:02:54 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/opts
Oct 5 06:02:54 localhost dnsmasq[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/addn_hosts - 0 addresses
Oct 5 06:02:54 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/host
Oct 5 06:02:54 localhost dnsmasq-dhcp[323390]: read /var/lib/neutron/dhcp/9493e121-6caf-4009-9106-31c87685c480/opts
Oct 5 06:02:54 localhost podman[325569]: 2025-10-05 10:02:54.930350429 +0000 UTC m=+0.060876605 container kill 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 5 06:02:55 localhost ovn_controller[157794]: 2025-10-05T10:02:55Z|00150|binding|INFO|Releasing lport cc68d2d0-cdaa-4495-848c-84e3ef78e69c from this chassis (sb_readonly=0)
Oct 5 06:02:55 localhost ovn_controller[157794]: 2025-10-05T10:02:55Z|00151|binding|INFO|Setting lport cc68d2d0-cdaa-4495-848c-84e3ef78e69c down in Southbound
Oct 5 06:02:55 localhost nova_compute[297021]: 2025-10-05 10:02:55.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:55 localhost kernel: device tapcc68d2d0-cd left promiscuous mode
Oct 5 06:02:55 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:55.147 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-9493e121-6caf-4009-9106-31c87685c480', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9493e121-6caf-4009-9106-31c87685c480', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0269f0ba-15e7-46b3-9fe6-9a4bc91e9d33, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=cc68d2d0-cdaa-4495-848c-84e3ef78e69c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:02:55 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:55.149 163434 INFO neutron.agent.ovn.metadata.agent [-] Port cc68d2d0-cdaa-4495-848c-84e3ef78e69c in datapath 9493e121-6caf-4009-9106-31c87685c480 unbound from our chassis
Oct 5 06:02:55 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:55.152 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9493e121-6caf-4009-9106-31c87685c480, tearing the namespace down if needed _get_provision_params
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:02:55 localhost ovn_metadata_agent[163429]: 2025-10-05 10:02:55.153 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0a32b2-ecc7-48dd-ae1c-7d4bf44280a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:55 localhost nova_compute[297021]: 2025-10-05 10:02:55.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e93 do_prune osdmap full prune enabled Oct 5 06:02:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e94 e94: 6 total, 6 up, 6 in Oct 5 06:02:55 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e94: 6 total, 6 up, 6 in Oct 5 06:02:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e94 do_prune osdmap full prune enabled Oct 5 06:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:02:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e95 e95: 6 total, 6 up, 6 in Oct 5 06:02:56 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e95: 6 total, 6 up, 6 in Oct 5 06:02:56 localhost systemd[1]: tmp-crun.emHdUr.mount: Deactivated successfully. 
Oct 5 06:02:56 localhost podman[325593]: 2025-10-05 10:02:56.684062426 +0000 UTC m=+0.078453172 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Oct 5 06:02:56 localhost podman[325593]: 2025-10-05 10:02:56.716527927 +0000 UTC 
m=+0.110918733 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001) Oct 5 06:02:56 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:02:57 localhost ovn_controller[157794]: 2025-10-05T10:02:57Z|00152|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:02:57 localhost nova_compute[297021]: 2025-10-05 10:02:57.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:57 localhost podman[325627]: 2025-10-05 10:02:57.914251305 +0000 UTC m=+0.074893386 container kill 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 06:02:57 localhost dnsmasq[323390]: exiting on receipt of SIGTERM Oct 5 06:02:57 localhost systemd[1]: libpod-6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517.scope: Deactivated successfully. 
Oct 5 06:02:58 localhost podman[325643]: 2025-10-05 10:02:58.002922093 +0000 UTC m=+0.063293880 container died 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:02:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517-userdata-shm.mount: Deactivated successfully. Oct 5 06:02:58 localhost systemd[1]: var-lib-containers-storage-overlay-ad0270bf4b6aed4fbe454a52f88392f170cea66ac0ba8bb40d161fde5ac5b1fb-merged.mount: Deactivated successfully. Oct 5 06:02:58 localhost podman[325643]: 2025-10-05 10:02:58.103521455 +0000 UTC m=+0.163893192 container remove 6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:02:58 localhost systemd[1]: libpod-conmon-6e853f2102f992d65e23db3f3a3cd81cdf9263abeda0bdab6501272532dad517.scope: Deactivated successfully. Oct 5 06:02:58 localhost systemd[1]: run-netns-qdhcp\x2d9493e121\x2d6caf\x2d4009\x2d9106\x2d31c87685c480.mount: Deactivated successfully. 
Oct 5 06:02:58 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:58.477 272040 INFO neutron.agent.dhcp.agent [None req-568dc65e-311c-4c0a-810d-a8e84871d293 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:02:58 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:58.478 272040 INFO neutron.agent.dhcp.agent [None req-568dc65e-311c-4c0a-810d-a8e84871d293 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:02:58 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:02:58.639 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:02:58 localhost nova_compute[297021]: 2025-10-05 10:02:58.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:58 localhost nova_compute[297021]: 2025-10-05 10:02:58.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:02:59 localhost podman[325668]: 2025-10-05 10:02:59.683470423 +0000 UTC m=+0.087007845 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0) Oct 5 06:02:59 localhost podman[325668]: 2025-10-05 10:02:59.76108179 +0000 UTC m=+0.164619172 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller) Oct 5 06:02:59 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:03:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e95 do_prune osdmap full prune enabled Oct 5 06:03:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e96 e96: 6 total, 6 up, 6 in Oct 5 06:03:01 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e96: 6 total, 6 up, 6 in Oct 5 06:03:01 localhost nova_compute[297021]: 2025-10-05 10:03:01.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:03:02 localhost podman[325694]: 2025-10-05 10:03:02.678731367 +0000 UTC m=+0.091447205 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Oct 5 06:03:02 localhost podman[325694]: 2025-10-05 10:03:02.690816004 +0000 UTC m=+0.103531822 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:03:02 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:03:02 localhost ovn_controller[157794]: 2025-10-05T10:03:02Z|00153|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:03:02 localhost nova_compute[297021]: 2025-10-05 10:03:02.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:03 localhost nova_compute[297021]: 2025-10-05 10:03:03.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:03 localhost nova_compute[297021]: 2025-10-05 10:03:03.641 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:03:03 localhost nova_compute[297021]: 2025-10-05 10:03:03.642 2 INFO nova.compute.manager [-] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] VM Stopped (Lifecycle Event)#033[00m Oct 5 06:03:03 localhost nova_compute[297021]: 2025-10-05 10:03:03.659 2 DEBUG nova.compute.manager [None req-49fd5a7c-66e3-41df-a470-8bbd3a81aea1 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:03:03 localhost nova_compute[297021]: 2025-10-05 10:03:03.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:03 localhost nova_compute[297021]: 2025-10-05 10:03:03.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:03:04 localhost podman[325714]: 2025-10-05 10:03:04.67864924 +0000 UTC m=+0.085176895 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 06:03:04 localhost podman[325714]: 2025-10-05 10:03:04.695938679 +0000 UTC m=+0.102466364 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.) Oct 5 06:03:04 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:03:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:06 localhost nova_compute[297021]: 2025-10-05 10:03:06.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:07 localhost nova_compute[297021]: 2025-10-05 10:03:07.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e96 do_prune osdmap full prune enabled Oct 5 06:03:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e97 e97: 6 total, 6 up, 6 in Oct 5 06:03:07 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e97: 6 total, 6 up, 6 in Oct 5 06:03:07 localhost nova_compute[297021]: 2025-10-05 10:03:07.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:08 localhost nova_compute[297021]: 2025-10-05 
10:03:08.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:08 localhost nova_compute[297021]: 2025-10-05 10:03:08.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:09 localhost neutron_sriov_agent[264984]: 2025-10-05 10:03:09.572 2 INFO neutron.agent.securitygroups_rpc [None req-658ae260-e241-4207-aff8-7d411df32887 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Security group rule updated ['18162d23-56f3-4a7e-93c2-8a3429bcf8f3']#033[00m Oct 5 06:03:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:03:09 localhost podman[325735]: 2025-10-05 10:03:09.668964674 +0000 UTC m=+0.080602120 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:03:09 localhost podman[325735]: 2025-10-05 10:03:09.680840206 +0000 UTC m=+0.092477622 
container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:03:09 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:03:09 localhost neutron_sriov_agent[264984]: 2025-10-05 10:03:09.763 2 INFO neutron.agent.securitygroups_rpc [None req-2959de52-f7c2-4f59-a482-69a1686405fe d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Security group rule updated ['18162d23-56f3-4a7e-93c2-8a3429bcf8f3']#033[00m Oct 5 06:03:10 localhost nova_compute[297021]: 2025-10-05 10:03:10.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:10 localhost nova_compute[297021]: 2025-10-05 10:03:10.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:03:11 localhost nova_compute[297021]: 2025-10-05 10:03:11.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:11 localhost nova_compute[297021]: 2025-10-05 10:03:11.423 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:12 localhost nova_compute[297021]: 2025-10-05 10:03:12.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.460 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.461 2 
DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.461 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.461 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.462 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:13 localhost nova_compute[297021]: 2025-10-05 10:03:13.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:03:14 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3874513291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.016 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.093 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.094 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.309 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.311 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11294MB free_disk=41.700164794921875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.311 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.311 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.422 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.422 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.422 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:03:14 localhost nova_compute[297021]: 2025-10-05 10:03:14.471 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:03:15 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3963675820' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:03:15 localhost nova_compute[297021]: 2025-10-05 10:03:15.017 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:15 localhost nova_compute[297021]: 2025-10-05 10:03:15.022 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:03:15 localhost nova_compute[297021]: 2025-10-05 10:03:15.050 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:03:15 localhost neutron_sriov_agent[264984]: 2025-10-05 10:03:15.069 2 INFO neutron.agent.securitygroups_rpc [req-007b9dd9-858a-4fdf-818c-a5235e42ef11 req-32195d07-d5fb-43db-8bc6-dbd8908366b6 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Security group member updated ['18162d23-56f3-4a7e-93c2-8a3429bcf8f3']#033[00m Oct 5 06:03:15 localhost nova_compute[297021]: 2025-10-05 10:03:15.088 2 DEBUG nova.compute.resource_tracker 
[None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:03:15 localhost nova_compute[297021]: 2025-10-05 10:03:15.088 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:03:15 localhost podman[325803]: 2025-10-05 10:03:15.685804646 +0000 UTC m=+0.088820203 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:03:15 localhost podman[325803]: 2025-10-05 10:03:15.6977413 +0000 UTC m=+0.100756907 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:03:15 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:03:16 localhost ovn_controller[157794]: 2025-10-05T10:03:16Z|00154|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:03:16 localhost nova_compute[297021]: 2025-10-05 10:03:16.089 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:16 localhost nova_compute[297021]: 2025-10-05 10:03:16.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:16 localhost nova_compute[297021]: 2025-10-05 10:03:16.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e97 do_prune osdmap full prune enabled Oct 5 06:03:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e98 e98: 6 total, 6 up, 6 in Oct 5 06:03:16 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e98: 6 total, 6 up, 6 in Oct 5 06:03:17 localhost nova_compute[297021]: 2025-10-05 10:03:17.416 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:18 localhost nova_compute[297021]: 2025-10-05 10:03:18.422 2 DEBUG oslo_service.periodic_task [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:18 localhost nova_compute[297021]: 2025-10-05 10:03:18.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:03:18 localhost nova_compute[297021]: 2025-10-05 10:03:18.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:03:18 localhost nova_compute[297021]: 2025-10-05 10:03:18.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:19 localhost nova_compute[297021]: 2025-10-05 10:03:19.525 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:03:19 localhost nova_compute[297021]: 2025-10-05 10:03:19.525 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:03:19 localhost nova_compute[297021]: 2025-10-05 10:03:19.525 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:03:19 
localhost nova_compute[297021]: 2025-10-05 10:03:19.526 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:03:19 localhost systemd[1]: tmp-crun.rx1Fwg.mount: Deactivated successfully. Oct 5 06:03:19 localhost podman[325827]: 2025-10-05 10:03:19.699755345 +0000 UTC m=+0.099680228 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid) Oct 5 06:03:19 localhost podman[325828]: 2025-10-05 10:03:19.742631429 +0000 UTC m=+0.139479339 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd) Oct 5 06:03:19 localhost podman[325828]: 2025-10-05 10:03:19.755386896 +0000 UTC m=+0.152234776 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:03:19 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 06:03:19 localhost podman[325827]: 2025-10-05 10:03:19.810341558 +0000 UTC m=+0.210266441 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team) Oct 5 06:03:19 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:03:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:20.465 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:20.466 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:20.467 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:20 localhost nova_compute[297021]: 2025-10-05 10:03:20.580 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], 
"meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:20 localhost nova_compute[297021]: 2025-10-05 10:03:20.617 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:03:20 localhost nova_compute[297021]: 2025-10-05 10:03:20.618 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:03:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:21 localhost podman[248506]: time="2025-10-05T10:03:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:03:21 localhost podman[248506]: @ - - [05/Oct/2025:10:03:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:03:21 localhost podman[248506]: @ - - [05/Oct/2025:10:03:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 
200 19373 "" "Go-http-client/1.1" Oct 5 06:03:22 localhost openstack_network_exporter[250601]: ERROR 10:03:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:03:22 localhost openstack_network_exporter[250601]: ERROR 10:03:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:03:22 localhost openstack_network_exporter[250601]: ERROR 10:03:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:03:22 localhost openstack_network_exporter[250601]: ERROR 10:03:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:03:22 localhost openstack_network_exporter[250601]: Oct 5 06:03:22 localhost openstack_network_exporter[250601]: ERROR 10:03:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:03:22 localhost openstack_network_exporter[250601]: Oct 5 06:03:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e98 do_prune osdmap full prune enabled Oct 5 06:03:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e99 e99: 6 total, 6 up, 6 in Oct 5 06:03:23 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e99: 6 total, 6 up, 6 in Oct 5 06:03:23 localhost nova_compute[297021]: 2025-10-05 10:03:23.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:23 localhost nova_compute[297021]: 2025-10-05 10:03:23.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:23 localhost nova_compute[297021]: 2025-10-05 10:03:23.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:03:23 localhost 
nova_compute[297021]: 2025-10-05 10:03:23.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:23 localhost nova_compute[297021]: 2025-10-05 10:03:23.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:23 localhost nova_compute[297021]: 2025-10-05 10:03:23.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:03:23 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:03:24 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:03:24 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:03:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e99 do_prune osdmap full prune enabled Oct 5 06:03:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e100 e100: 6 total, 6 up, 6 in Oct 5 06:03:25 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e100: 6 total, 6 up, 6 in Oct 5 06:03:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e100 do_prune osdmap full prune enabled Oct 5 06:03:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e101 e101: 6 total, 6 up, 6 in Oct 5 06:03:26 localhost ceph-mon[308154]: log_channel(cluster) log 
[DBG] : osdmap e101: 6 total, 6 up, 6 in Oct 5 06:03:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:03:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:03:27 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:03:27 localhost podman[325951]: 2025-10-05 10:03:27.671640171 +0000 UTC m=+0.081097564 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:03:27 localhost podman[325951]: 2025-10-05 10:03:27.707796383 +0000 UTC m=+0.117253706 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:03:27 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:03:28 localhost nova_compute[297021]: 2025-10-05 10:03:28.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:28 localhost nova_compute[297021]: 2025-10-05 10:03:28.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:28 localhost nova_compute[297021]: 2025-10-05 10:03:28.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:03:28 localhost nova_compute[297021]: 2025-10-05 10:03:28.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:28 localhost nova_compute[297021]: 2025-10-05 10:03:28.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:28 localhost nova_compute[297021]: 2025-10-05 10:03:28.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:28 localhost nova_compute[297021]: 2025-10-05 10:03:28.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:28.852 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:03:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:28.853 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:03:29 localhost nova_compute[297021]: 2025-10-05 10:03:29.558 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Acquiring lock "9e6b5f89-78fa-4e32-8553-6278cf0120a2" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:29 localhost nova_compute[297021]: 2025-10-05 10:03:29.558 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lock "9e6b5f89-78fa-4e32-8553-6278cf0120a2" acquired by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:29 
localhost nova_compute[297021]: 2025-10-05 10:03:29.559 2 INFO nova.compute.manager [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Unshelving#033[00m Oct 5 06:03:29 localhost nova_compute[297021]: 2025-10-05 10:03:29.655 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:29 localhost nova_compute[297021]: 2025-10-05 10:03:29.656 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:29 localhost nova_compute[297021]: 2025-10-05 10:03:29.658 2 DEBUG nova.objects.instance [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lazy-loading 'pci_requests' on Instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:29 localhost nova_compute[297021]: 2025-10-05 10:03:29.678 2 DEBUG nova.objects.instance [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lazy-loading 'numa_topology' on Instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:29 localhost nova_compute[297021]: 
2025-10-05 10:03:29.701 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Oct 5 06:03:29 localhost nova_compute[297021]: 2025-10-05 10:03:29.701 2 INFO nova.compute.claims [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Claim successful on node np0005471150.localdomain#033[00m Oct 5 06:03:29 localhost nova_compute[297021]: 2025-10-05 10:03:29.809 2 DEBUG oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:30 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:03:30 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1472535906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.255 2 DEBUG oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.262 2 DEBUG nova.compute.provider_tree [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.279 2 DEBUG nova.scheduler.client.report [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.309 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.339 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Acquiring lock "refresh_cache-9e6b5f89-78fa-4e32-8553-6278cf0120a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.340 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Acquired lock "refresh_cache-9e6b5f89-78fa-4e32-8553-6278cf0120a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.340 2 DEBUG nova.network.neutron [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.385 2 DEBUG nova.network.neutron [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.486 2 DEBUG nova.network.neutron [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.508 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Releasing lock "refresh_cache-9e6b5f89-78fa-4e32-8553-6278cf0120a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.510 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.511 2 INFO nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Creating image(s)#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.548 2 DEBUG nova.storage.rbd_utils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] rbd image 9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.553 2 DEBUG nova.objects.instance [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.606 2 DEBUG nova.storage.rbd_utils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] rbd image 9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.653 2 DEBUG nova.storage.rbd_utils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] rbd image 9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.659 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Acquiring lock "0eaeaaacfc0a608ad0111fb51f8ae5f73b16552b" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.660 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a 
e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lock "0eaeaaacfc0a608ad0111fb51f8ae5f73b16552b" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:30 localhost podman[326014]: 2025-10-05 10:03:30.68254169 +0000 UTC m=+0.089725498 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.713 2 DEBUG nova.virt.libvirt.imagebackend [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 
2e50a952472d495d9a7d0541722bd04d - - default default] Image locations are: [{'url': 'rbd://659062ac-50b4-5607-b699-3105da7f55ee/images/87dea3d5-c58b-4738-adf0-6ae9d84b530a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://659062ac-50b4-5607-b699-3105da7f55ee/images/87dea3d5-c58b-4738-adf0-6ae9d84b530a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Oct 5 06:03:30 localhost podman[326014]: 2025-10-05 10:03:30.723949175 +0000 UTC m=+0.131133043 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:03:30 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: 
Deactivated successfully. Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.833 2 DEBUG nova.virt.libvirt.imagebackend [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Selected location: {'url': 'rbd://659062ac-50b4-5607-b699-3105da7f55ee/images/87dea3d5-c58b-4738-adf0-6ae9d84b530a/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m Oct 5 06:03:30 localhost nova_compute[297021]: 2025-10-05 10:03:30.834 2 DEBUG nova.storage.rbd_utils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] cloning images/87dea3d5-c58b-4738-adf0-6ae9d84b530a@snap to None/9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Oct 5 06:03:31 localhost nova_compute[297021]: 2025-10-05 10:03:31.134 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lock "0eaeaaacfc0a608ad0111fb51f8ae5f73b16552b" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:31 localhost nova_compute[297021]: 2025-10-05 10:03:31.356 2 DEBUG nova.objects.instance [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lazy-loading 'migration_context' on Instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:31 localhost nova_compute[297021]: 2025-10-05 10:03:31.450 2 DEBUG nova.storage.rbd_utils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 
2e50a952472d495d9a7d0541722bd04d - - default default] flattening vms/9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Oct 5 06:03:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e101 do_prune osdmap full prune enabled Oct 5 06:03:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e102 e102: 6 total, 6 up, 6 in Oct 5 06:03:31 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e102: 6 total, 6 up, 6 in Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.272 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Image rbd:vms/9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. 
_try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.273 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.274 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Ensure instance console log exists: /var/lib/nova/instances/9e6b5f89-78fa-4e32-8553-6278cf0120a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.274 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.275 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.275 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a 
e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.278 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-05T10:03:09Z,direct_url=,disk_format='raw',id=87dea3d5-c58b-4738-adf0-6ae9d84b530a,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1091743413-shelved',owner='483a378f938c4810b4c4c8fa8748972a',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-10-05T10:03:27Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'image_id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.283 2 WARNING nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 
2e50a952472d495d9a7d0541722bd04d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.285 2 DEBUG nova.virt.libvirt.host [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Searching host: 'np0005471150.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.286 2 DEBUG nova.virt.libvirt.host [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.288 2 DEBUG nova.virt.libvirt.host [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Searching host: 'np0005471150.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.289 2 DEBUG nova.virt.libvirt.host [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.289 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.290 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-05T10:00:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='97ddc44b-feec-4b28-874c-024e6ebcea56',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-05T10:03:09Z,direct_url=,disk_format='raw',id=87dea3d5-c58b-4738-adf0-6ae9d84b530a,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1091743413-shelved',owner='483a378f938c4810b4c4c8fa8748972a',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-10-05T10:03:27Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.291 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.291 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.292 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.293 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.293 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.294 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Oct 5 06:03:32 
localhost nova_compute[297021]: 2025-10-05 10:03:32.294 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.295 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.295 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.296 2 DEBUG nova.virt.hardware [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.296 2 DEBUG nova.objects.instance [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.329 2 DEBUG 
oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:03:32 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3604332082' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.813 2 DEBUG oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.836 2 DEBUG nova.storage.rbd_utils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] rbd image 9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:32 localhost nova_compute[297021]: 2025-10-05 10:03:32.839 2 DEBUG oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:33 localhost snmpd[68045]: empty variable list in 
_query Oct 5 06:03:33 localhost snmpd[68045]: empty variable list in _query Oct 5 06:03:33 localhost snmpd[68045]: empty variable list in _query Oct 5 06:03:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:03:33 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2832437238' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.311 2 DEBUG oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.314 2 DEBUG nova.objects.instance [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.330 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] End _get_guest_xml xml= [libvirt guest XML elided; markup tags were stripped during log capture. Recoverable fields: domain instance-00000008, instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2, name tempest-UnshelveToHostMultiNodesTest-server-1091743413, created 2025-10-05 10:03:32, memory 131072 KiB, 1 vCPU (flavor memory 128 MB), machine type hvm, owner tempest-UnshelveToHostMultiNodesTest-231347573-project-member / project tempest-UnshelveToHostMultiNodesTest-231347573, sysinfo RDO OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9, rng backend /dev/urandom] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.375 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] No BDM found with
device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.376 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.377 2 INFO nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Using config drive#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.413 2 DEBUG nova.storage.rbd_utils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] rbd image 9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.438 2 DEBUG nova.objects.instance [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.474 2 DEBUG nova.objects.instance [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lazy-loading 'keypairs' on Instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.529 2 INFO nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Creating config drive at /var/lib/nova/instances/9e6b5f89-78fa-4e32-8553-6278cf0120a2/disk.config#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.536 2 DEBUG oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e6b5f89-78fa-4e32-8553-6278cf0120a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprv5u2wqw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.665 2 DEBUG oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e6b5f89-78fa-4e32-8553-6278cf0120a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprv5u2wqw" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:33 localhost podman[326313]: 2025-10-05 10:03:33.680772234 +0000 UTC m=+0.081252177 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3) Oct 5 06:03:33 localhost podman[326313]: 2025-10-05 10:03:33.696327157 +0000 UTC m=+0.096807040 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:03:33 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.712 2 DEBUG nova.storage.rbd_utils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] rbd image 9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.717 2 DEBUG oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e6b5f89-78fa-4e32-8553-6278cf0120a2/disk.config 9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.983 2 DEBUG oslo_concurrency.processutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] CMD "rbd import --pool vms 
/var/lib/nova/instances/9e6b5f89-78fa-4e32-8553-6278cf0120a2/disk.config 9e6b5f89-78fa-4e32-8553-6278cf0120a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:33 localhost nova_compute[297021]: 2025-10-05 10:03:33.985 2 INFO nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Deleting local config drive /var/lib/nova/instances/9e6b5f89-78fa-4e32-8553-6278cf0120a2/disk.config because it was imported into RBD.#033[00m Oct 5 06:03:34 localhost systemd-machined[84982]: New machine qemu-5-instance-00000008. Oct 5 06:03:34 localhost systemd[1]: Started Virtual Machine qemu-5-instance-00000008. Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.760 2 DEBUG nova.virt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.761 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] VM Resumed (Lifecycle Event)#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.763 2 DEBUG nova.compute.manager [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.764 2 DEBUG nova.virt.libvirt.driver [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a 
e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.768 2 INFO nova.virt.libvirt.driver [-] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Instance spawned successfully.#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.803 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.807 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.841 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.842 2 DEBUG nova.virt.driver [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.842 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] VM Started (Lifecycle Event)#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.860 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.864 2 DEBUG nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 06:03:34 localhost nova_compute[297021]: 2025-10-05 10:03:34.891 2 INFO nova.compute.manager [None req-eec47342-b37c-4e32-a725-b7e6b02bf801 - - - - - -] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Oct 5 06:03:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e102 do_prune osdmap full prune enabled Oct 5 06:03:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e103 e103: 6 total, 6 up, 6 in Oct 5 06:03:35 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e103: 6 total, 6 up, 6 in Oct 5 06:03:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:03:35 localhost podman[326426]: 2025-10-05 10:03:35.711249647 +0000 UTC m=+0.103161092 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41) Oct 5 06:03:35 localhost podman[326426]: 2025-10-05 10:03:35.727982022 +0000 UTC m=+0.119893467 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base 
Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7) Oct 5 06:03:35 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:03:36 localhost nova_compute[297021]: 2025-10-05 10:03:36.098 2 DEBUG nova.compute.manager [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:03:36 localhost nova_compute[297021]: 2025-10-05 10:03:36.185 2 DEBUG oslo_concurrency.lockutils [None req-847c65d6-e6bb-46a0-b7fa-b3f4eed9eb0a e6c6fe1034d042678d091fc6231c46fa 2e50a952472d495d9a7d0541722bd04d - - default default] Lock "9e6b5f89-78fa-4e32-8553-6278cf0120a2" "released" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: held 6.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:36 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:36.856 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.518 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Acquiring lock "9e6b5f89-78fa-4e32-8553-6278cf0120a2" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 
10:03:37.519 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Lock "9e6b5f89-78fa-4e32-8553-6278cf0120a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.520 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Acquiring lock "9e6b5f89-78fa-4e32-8553-6278cf0120a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.520 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Lock "9e6b5f89-78fa-4e32-8553-6278cf0120a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.521 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Lock "9e6b5f89-78fa-4e32-8553-6278cf0120a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.522 2 INFO nova.compute.manager [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 
12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Terminating instance#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.524 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Acquiring lock "refresh_cache-9e6b5f89-78fa-4e32-8553-6278cf0120a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.524 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Acquired lock "refresh_cache-9e6b5f89-78fa-4e32-8553-6278cf0120a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.525 2 DEBUG nova.network.neutron [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.576 2 DEBUG nova.network.neutron [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.695 2 DEBUG nova.network.neutron [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.710 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Releasing lock "refresh_cache-9e6b5f89-78fa-4e32-8553-6278cf0120a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.711 2 DEBUG nova.compute.manager [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Oct 5 06:03:37 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Deactivated successfully. Oct 5 06:03:37 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Consumed 3.796s CPU time. Oct 5 06:03:37 localhost systemd-machined[84982]: Machine qemu-5-instance-00000008 terminated. 
Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.933 2 INFO nova.virt.libvirt.driver [-] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Instance destroyed successfully.#033[00m Oct 5 06:03:37 localhost nova_compute[297021]: 2025-10-05 10:03:37.934 2 DEBUG nova.objects.instance [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Lazy-loading 'resources' on Instance uuid 9e6b5f89-78fa-4e32-8553-6278cf0120a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.592 2 INFO nova.virt.libvirt.driver [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Deleting instance files /var/lib/nova/instances/9e6b5f89-78fa-4e32-8553-6278cf0120a2_del#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.593 2 INFO nova.virt.libvirt.driver [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Deletion of /var/lib/nova/instances/9e6b5f89-78fa-4e32-8553-6278cf0120a2_del complete#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.639 2 INFO nova.compute.manager [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.640 2 DEBUG oslo.service.loopingcall [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Waiting for function 
nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.641 2 DEBUG nova.compute.manager [-] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.641 2 DEBUG nova.network.neutron [-] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.791 2 DEBUG nova.network.neutron [-] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.806 2 DEBUG nova.network.neutron [-] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:03:38 localhost 
nova_compute[297021]: 2025-10-05 10:03:38.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.816 2 INFO nova.compute.manager [-] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Took 0.18 seconds to deallocate network for instance.#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.852 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.853 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:38 localhost nova_compute[297021]: 2025-10-05 10:03:38.920 2 DEBUG oslo_concurrency.processutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Running cmd (subprocess): ceph df --format=json --id 
openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:39 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:03:39 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3202924842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:03:39 localhost nova_compute[297021]: 2025-10-05 10:03:39.378 2 DEBUG oslo_concurrency.processutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:39 localhost nova_compute[297021]: 2025-10-05 10:03:39.385 2 DEBUG nova.compute.provider_tree [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:03:39 localhost nova_compute[297021]: 2025-10-05 10:03:39.403 2 DEBUG nova.scheduler.client.report [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} 
set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:03:39 localhost nova_compute[297021]: 2025-10-05 10:03:39.427 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:39 localhost nova_compute[297021]: 2025-10-05 10:03:39.455 2 INFO nova.scheduler.client.report [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Deleted allocations for instance 9e6b5f89-78fa-4e32-8553-6278cf0120a2#033[00m Oct 5 06:03:39 localhost nova_compute[297021]: 2025-10-05 10:03:39.529 2 DEBUG oslo_concurrency.lockutils [None req-3a8c174a-a5ef-4f63-827c-7e740ba17961 12765113e1e14c169192609aa10f1612 483a378f938c4810b4c4c8fa8748972a - - default default] Lock "9e6b5f89-78fa-4e32-8553-6278cf0120a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:03:40 localhost podman[326489]: 2025-10-05 10:03:40.669205975 +0000 UTC m=+0.079771507 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:03:40 localhost podman[326489]: 2025-10-05 10:03:40.681712375 +0000 UTC m=+0.092277917 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:03:40 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:03:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e103 do_prune osdmap full prune enabled Oct 5 06:03:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e104 e104: 6 total, 6 up, 6 in Oct 5 06:03:41 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e104: 6 total, 6 up, 6 in Oct 5 06:03:43 localhost nova_compute[297021]: 2025-10-05 10:03:43.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:43 localhost nova_compute[297021]: 2025-10-05 10:03:43.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:43 localhost nova_compute[297021]: 2025-10-05 10:03:43.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:03:43 localhost nova_compute[297021]: 2025-10-05 10:03:43.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:43 localhost nova_compute[297021]: 2025-10-05 10:03:43.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:43 localhost nova_compute[297021]: 2025-10-05 10:03:43.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:03:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:03:46 localhost podman[326512]: 2025-10-05 10:03:46.679264624 +0000 UTC m=+0.086300734 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:03:46 localhost 
podman[326512]: 2025-10-05 10:03:46.690900671 +0000 UTC m=+0.097936781 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:03:46 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:03:47 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:47.460 272040 INFO neutron.agent.linux.ip_lib [None req-93ce9167-bdc8-48bc-97a1-7751a4f8fa77 - - - - - -] Device tape5c17064-5a cannot be used as it has no MAC address#033[00m Oct 5 06:03:47 localhost nova_compute[297021]: 2025-10-05 10:03:47.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:47 localhost kernel: device tape5c17064-5a entered promiscuous mode Oct 5 06:03:47 localhost ovn_controller[157794]: 2025-10-05T10:03:47Z|00155|binding|INFO|Claiming lport e5c17064-5ad1-46c5-903a-7496da00709d for this chassis. Oct 5 06:03:47 localhost ovn_controller[157794]: 2025-10-05T10:03:47Z|00156|binding|INFO|e5c17064-5ad1-46c5-903a-7496da00709d: Claiming unknown Oct 5 06:03:47 localhost nova_compute[297021]: 2025-10-05 10:03:47.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:47 localhost NetworkManager[5981]: [1759658627.4903] manager: (tape5c17064-5a): new Generic device (/org/freedesktop/NetworkManager/Devices/29) Oct 5 06:03:47 localhost systemd-udevd[326545]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:03:47 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:47.500 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-39e5c582-d241-406e-8646-5f1118b9aecb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39e5c582-d241-406e-8646-5f1118b9aecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23d0921d70724e3aab0ac10fdc837c26', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fff71ec0-e204-4bae-ba45-6b6fc1b00cda, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e5c17064-5ad1-46c5-903a-7496da00709d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:03:47 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:47.502 163434 INFO neutron.agent.ovn.metadata.agent [-] Port e5c17064-5ad1-46c5-903a-7496da00709d in datapath 39e5c582-d241-406e-8646-5f1118b9aecb bound to our chassis#033[00m
Oct 5 06:03:47 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:47.504 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 39e5c582-d241-406e-8646-5f1118b9aecb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 5 06:03:47 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:47.505 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[34f0dae7-c826-4a74-a879-04b76782262e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:47 localhost journal[237931]: ethtool ioctl error on tape5c17064-5a: No such device
Oct 5 06:03:47 localhost ovn_controller[157794]: 2025-10-05T10:03:47Z|00157|binding|INFO|Setting lport e5c17064-5ad1-46c5-903a-7496da00709d ovn-installed in OVS
Oct 5 06:03:47 localhost ovn_controller[157794]: 2025-10-05T10:03:47Z|00158|binding|INFO|Setting lport e5c17064-5ad1-46c5-903a-7496da00709d up in Southbound
Oct 5 06:03:47 localhost journal[237931]: ethtool ioctl error on tape5c17064-5a: No such device
Oct 5 06:03:47 localhost nova_compute[297021]: 2025-10-05 10:03:47.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:47 localhost journal[237931]: ethtool ioctl error on tape5c17064-5a: No such device
Oct 5 06:03:47 localhost journal[237931]: ethtool ioctl error on tape5c17064-5a: No such device
Oct 5 06:03:47 localhost journal[237931]: ethtool ioctl error on tape5c17064-5a: No such device
Oct 5 06:03:47 localhost journal[237931]: ethtool ioctl error on tape5c17064-5a: No such device
Oct 5 06:03:47 localhost journal[237931]: ethtool ioctl error on tape5c17064-5a: No such device
Oct 5 06:03:47 localhost journal[237931]: ethtool ioctl error on tape5c17064-5a: No such device
Oct 5 06:03:47 localhost nova_compute[297021]: 2025-10-05 10:03:47.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:47 localhost nova_compute[297021]: 2025-10-05 10:03:47.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:48 localhost podman[326617]:
Oct 5 06:03:48 localhost podman[326617]: 2025-10-05 10:03:48.517894858 +0000 UTC m=+0.093000067 container create 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:03:48 localhost systemd[1]: Started libpod-conmon-2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f.scope.
Oct 5 06:03:48 localhost podman[326617]: 2025-10-05 10:03:48.472009041 +0000 UTC m=+0.047114310 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:03:48 localhost systemd[1]: tmp-crun.WH5Bzd.mount: Deactivated successfully.
Oct 5 06:03:48 localhost systemd[1]: Started libcrun container.
Oct 5 06:03:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faaeaf4bfc77cbfff9f4dcc8ee23e1ca5cc9350f1e8d0527b2604e00af3ee304/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:03:48 localhost podman[326617]: 2025-10-05 10:03:48.602565057 +0000 UTC m=+0.177670266 container init 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:03:48 localhost podman[326617]: 2025-10-05 10:03:48.612074075 +0000 UTC m=+0.187179324 container start 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 5 06:03:48 localhost dnsmasq[326634]: started, version 2.85 cachesize 150
Oct 5 06:03:48 localhost dnsmasq[326634]: DNS service limited to local subnets
Oct 5 06:03:48 localhost dnsmasq[326634]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:03:48 localhost dnsmasq[326634]: warning: no upstream servers configured
Oct 5 06:03:48 localhost dnsmasq-dhcp[326634]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:03:48 localhost dnsmasq[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/addn_hosts - 0 addresses
Oct 5 06:03:48 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/host
Oct 5 06:03:48 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/opts
Oct 5 06:03:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 5 06:03:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e105 e105: 6 total, 6 up, 6 in
Oct 5 06:03:48 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e105: 6 total, 6 up, 6 in
Oct 5 06:03:48 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:48.723 272040 INFO neutron.agent.dhcp.agent [None req-43a7b5f8-ff19-40b1-83cc-3e591bb4ae40 - - - - - -] DHCP configuration for ports {'dfe302da-3086-418b-a937-ec8ef6134051'} is completed#033[00m
Oct 5 06:03:48 localhost nova_compute[297021]: 2025-10-05 10:03:48.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:49 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:49.169 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:03:48Z, description=, device_id=c3bef2e4-2970-42ac-8376-79c49bcac37b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=695b0d6f-c73e-4e1d-b5ac-c21b8306db62, ip_allocation=immediate, mac_address=fa:16:3e:b1:cf:8a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:03:46Z, description=, dns_domain=, id=39e5c582-d241-406e-8646-5f1118b9aecb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1013513681, port_security_enabled=True, project_id=23d0921d70724e3aab0ac10fdc837c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60440, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=846, status=ACTIVE, subnets=['547a30e7-08df-41a3-bae6-ceecefb4ba96'], tags=[], tenant_id=23d0921d70724e3aab0ac10fdc837c26, updated_at=2025-10-05T10:03:46Z, vlan_transparent=None, network_id=39e5c582-d241-406e-8646-5f1118b9aecb, port_security_enabled=False, project_id=23d0921d70724e3aab0ac10fdc837c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=863, status=DOWN, tags=[], tenant_id=23d0921d70724e3aab0ac10fdc837c26, updated_at=2025-10-05T10:03:49Z on network 39e5c582-d241-406e-8646-5f1118b9aecb#033[00m
Oct 5 06:03:49 localhost dnsmasq[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/addn_hosts - 1 addresses
Oct 5 06:03:49 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/host
Oct 5 06:03:49 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/opts
Oct 5 06:03:49 localhost podman[326652]: 2025-10-05 10:03:49.409533912 +0000 UTC m=+0.061514251 container kill 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 06:03:49 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:49.795 272040 INFO neutron.agent.dhcp.agent [None req-54d8dc4f-a68f-4d90-9a11-8a4d8d90ecab - - - - - -] DHCP configuration for ports {'695b0d6f-c73e-4e1d-b5ac-c21b8306db62'} is completed#033[00m
Oct 5 06:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 06:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 06:03:50 localhost systemd[1]: tmp-crun.g6m9u8.mount: Deactivated successfully.
Oct 5 06:03:50 localhost podman[326672]: 2025-10-05 10:03:50.699477494 +0000 UTC m=+0.099434391 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 5 06:03:50 localhost podman[326672]: 2025-10-05 10:03:50.710111542 +0000 UTC m=+0.110068449 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:03:50 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 06:03:50 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:50.724 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:03:48Z, description=, device_id=c3bef2e4-2970-42ac-8376-79c49bcac37b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=695b0d6f-c73e-4e1d-b5ac-c21b8306db62, ip_allocation=immediate, mac_address=fa:16:3e:b1:cf:8a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:03:46Z, description=, dns_domain=, id=39e5c582-d241-406e-8646-5f1118b9aecb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1013513681, port_security_enabled=True, project_id=23d0921d70724e3aab0ac10fdc837c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60440, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=846, status=ACTIVE, subnets=['547a30e7-08df-41a3-bae6-ceecefb4ba96'], tags=[], tenant_id=23d0921d70724e3aab0ac10fdc837c26, updated_at=2025-10-05T10:03:46Z, vlan_transparent=None, network_id=39e5c582-d241-406e-8646-5f1118b9aecb, port_security_enabled=False, project_id=23d0921d70724e3aab0ac10fdc837c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=863, status=DOWN, tags=[], tenant_id=23d0921d70724e3aab0ac10fdc837c26, updated_at=2025-10-05T10:03:49Z on network 39e5c582-d241-406e-8646-5f1118b9aecb#033[00m
Oct 5 06:03:50 localhost podman[326673]: 2025-10-05 10:03:50.795019479 +0000 UTC m=+0.192485439 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:03:50 localhost podman[326673]: 2025-10-05 10:03:50.812829592 +0000 UTC m=+0.210295582 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:03:50 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 06:03:50 localhost dnsmasq[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/addn_hosts - 1 addresses
Oct 5 06:03:50 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/host
Oct 5 06:03:50 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/opts
Oct 5 06:03:50 localhost podman[326729]: 2025-10-05 10:03:50.943604414 +0000 UTC m=+0.062850848 container kill 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:03:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:51.148 272040 INFO neutron.agent.dhcp.agent [None req-8ed211c7-3164-4ad3-96d8-93d61458fbd0 - - - - - -] DHCP configuration for ports {'695b0d6f-c73e-4e1d-b5ac-c21b8306db62'} is completed#033[00m
Oct 5 06:03:51 localhost neutron_sriov_agent[264984]: 2025-10-05 10:03:51.433 2 INFO neutron.agent.securitygroups_rpc [None req-2785e779-2925-4423-b26e-6eb40ca7212b f63fee7c8d0d4b7b9ec136ffedafd342 23d0921d70724e3aab0ac10fdc837c26 - - default default] Security group member updated ['d459832e-70ec-4fc9-937f-0daa53e0fda7']#033[00m
Oct 5 06:03:51 localhost podman[248506]: time="2025-10-05T10:03:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:03:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:51.465 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:03:51Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=90420969-c4a4-479e-a952-6891d4a2fe3e, ip_allocation=immediate, mac_address=fa:16:3e:51:a8:00, name=tempest-FloatingIPNegativeTestJSON-1514235353, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:03:46Z, description=, dns_domain=, id=39e5c582-d241-406e-8646-5f1118b9aecb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1013513681, port_security_enabled=True, project_id=23d0921d70724e3aab0ac10fdc837c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60440, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=846, status=ACTIVE, subnets=['547a30e7-08df-41a3-bae6-ceecefb4ba96'], tags=[], tenant_id=23d0921d70724e3aab0ac10fdc837c26, updated_at=2025-10-05T10:03:46Z, vlan_transparent=None, network_id=39e5c582-d241-406e-8646-5f1118b9aecb, port_security_enabled=True, project_id=23d0921d70724e3aab0ac10fdc837c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d459832e-70ec-4fc9-937f-0daa53e0fda7'], standard_attr_id=872, status=DOWN, tags=[], tenant_id=23d0921d70724e3aab0ac10fdc837c26, updated_at=2025-10-05T10:03:51Z on network 39e5c582-d241-406e-8646-5f1118b9aecb#033[00m
Oct 5 06:03:51 localhost podman[248506]: @ - - [05/Oct/2025:10:03:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147501 "" "Go-http-client/1.1"
Oct 5 06:03:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:51 localhost podman[248506]: @ - - [05/Oct/2025:10:03:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19852 "" "Go-http-client/1.1"
Oct 5 06:03:51 localhost dnsmasq[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/addn_hosts - 2 addresses
Oct 5 06:03:51 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/host
Oct 5 06:03:51 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/opts
Oct 5 06:03:51 localhost podman[326767]: 2025-10-05 10:03:51.693317044 +0000 UTC m=+0.059166797 container kill 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 5 06:03:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:51.904 272040 INFO neutron.agent.dhcp.agent [None req-95ad1615-3bc6-4d01-898d-8abf673ae38e - - - - - -] DHCP configuration for ports {'90420969-c4a4-479e-a952-6891d4a2fe3e'} is completed#033[00m
Oct 5 06:03:52 localhost openstack_network_exporter[250601]: ERROR 10:03:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:03:52 localhost openstack_network_exporter[250601]: ERROR 10:03:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:03:52 localhost openstack_network_exporter[250601]: ERROR 10:03:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:03:52 localhost openstack_network_exporter[250601]: ERROR 10:03:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:03:52 localhost openstack_network_exporter[250601]:
Oct 5 06:03:52 localhost openstack_network_exporter[250601]: ERROR 10:03:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:03:52 localhost openstack_network_exporter[250601]:
Oct 5 06:03:52 localhost nova_compute[297021]: 2025-10-05 10:03:52.931 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 5 06:03:52 localhost nova_compute[297021]: 2025-10-05 10:03:52.932 2 INFO nova.compute.manager [-] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] VM Stopped (Lifecycle Event)#033[00m
Oct 5 06:03:52 localhost nova_compute[297021]: 2025-10-05 10:03:52.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:52 localhost nova_compute[297021]: 2025-10-05 10:03:52.959 2 DEBUG nova.compute.manager [None req-51faba40-f833-4cce-af4a-8f365277b4d7 - - - - - -] [instance: 9e6b5f89-78fa-4e32-8553-6278cf0120a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 5 06:03:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 5 06:03:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e106 e106: 6 total, 6 up, 6 in
Oct 5 06:03:53 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e106: 6 total, 6 up, 6 in
Oct 5 06:03:53 localhost nova_compute[297021]: 2025-10-05 10:03:53.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:54 localhost neutron_sriov_agent[264984]: 2025-10-05 10:03:54.772 2 INFO neutron.agent.securitygroups_rpc [None req-a49a3e7f-2856-44c5-9390-f6d62ab91c62 f63fee7c8d0d4b7b9ec136ffedafd342 23d0921d70724e3aab0ac10fdc837c26 - - default default] Security group member updated ['d459832e-70ec-4fc9-937f-0daa53e0fda7']#033[00m
Oct 5 06:03:55 localhost dnsmasq[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/addn_hosts - 1 addresses
Oct 5 06:03:55 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/host
Oct 5 06:03:55 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/opts
Oct 5 06:03:55 localhost podman[326805]: 2025-10-05 10:03:55.016484715 +0000 UTC m=+0.070607389 container kill 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 5 06:03:56 localhost neutron_sriov_agent[264984]: 2025-10-05 10:03:56.361 2 INFO neutron.agent.securitygroups_rpc [req-cc241a83-bf1b-4563-99d8-792df22de69f req-7c539352-7cfc-4988-848a-55f30b42edc2 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Security group member updated ['18162d23-56f3-4a7e-93c2-8a3429bcf8f3']#033[00m
Oct 5 06:03:56 localhost dnsmasq[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/addn_hosts - 0 addresses
Oct 5 06:03:56 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/host
Oct 5 06:03:56 localhost podman[326843]: 2025-10-05 10:03:56.399912405 +0000 UTC m=+0.066156547 container kill 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:03:56 localhost dnsmasq-dhcp[326634]: read /var/lib/neutron/dhcp/39e5c582-d241-406e-8646-5f1118b9aecb/opts
Oct 5 06:03:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:56 localhost nova_compute[297021]: 2025-10-05 10:03:56.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:56 localhost kernel: device tape5c17064-5a left promiscuous mode
Oct 5 06:03:56 localhost ovn_controller[157794]: 2025-10-05T10:03:56Z|00159|binding|INFO|Releasing lport e5c17064-5ad1-46c5-903a-7496da00709d from this chassis (sb_readonly=0)
Oct 5 06:03:56 localhost ovn_controller[157794]: 2025-10-05T10:03:56Z|00160|binding|INFO|Setting lport e5c17064-5ad1-46c5-903a-7496da00709d down in Southbound
Oct 5 06:03:56 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:56.614 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-39e5c582-d241-406e-8646-5f1118b9aecb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39e5c582-d241-406e-8646-5f1118b9aecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23d0921d70724e3aab0ac10fdc837c26', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fff71ec0-e204-4bae-ba45-6b6fc1b00cda, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e5c17064-5ad1-46c5-903a-7496da00709d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:03:56 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:56.618 163434 INFO neutron.agent.ovn.metadata.agent [-] Port e5c17064-5ad1-46c5-903a-7496da00709d in datapath 39e5c582-d241-406e-8646-5f1118b9aecb unbound from our chassis#033[00m
Oct 5 06:03:56 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:56.621 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39e5c582-d241-406e-8646-5f1118b9aecb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:03:56 localhost nova_compute[297021]: 2025-10-05 10:03:56.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:56 localhost ovn_metadata_agent[163429]: 2025-10-05 10:03:56.622 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd1cba6-20c1-4c85-9a93-8c0d7a5218a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:57 localhost dnsmasq[326634]: exiting on receipt of SIGTERM
Oct 5 06:03:57 localhost podman[326882]: 2025-10-05 10:03:57.773702205 +0000 UTC m=+0.071928235 container kill 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:03:57 localhost systemd[1]: tmp-crun.HG53Te.mount: Deactivated successfully.
Oct 5 06:03:57 localhost systemd[1]: libpod-2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f.scope: Deactivated successfully.
Oct 5 06:03:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 06:03:57 localhost podman[326900]: 2025-10-05 10:03:57.85527699 +0000 UTC m=+0.058786078 container died 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:03:57 localhost podman[326908]: 2025-10-05 10:03:57.895689747 +0000 UTC m=+0.086088848 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:03:57 localhost podman[326908]: 2025-10-05 10:03:57.905812522 +0000 UTC m=+0.096211653 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 06:03:57 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:03:58 localhost podman[326900]: 2025-10-05 10:03:58.004890183 +0000 UTC m=+0.208399231 container remove 2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e5c582-d241-406e-8646-5f1118b9aecb, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 06:03:58 localhost systemd[1]: libpod-conmon-2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f.scope: Deactivated successfully. 
Oct 5 06:03:58 localhost nova_compute[297021]: 2025-10-05 10:03:58.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:58 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:58.035 272040 INFO neutron.agent.dhcp.agent [None req-e8c54536-970f-4225-bfbb-0c9e926c3611 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:03:58 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:03:58.171 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:03:58 localhost ovn_controller[157794]: 2025-10-05T10:03:58Z|00161|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:03:58 localhost nova_compute[297021]: 2025-10-05 10:03:58.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:58 localhost systemd[1]: var-lib-containers-storage-overlay-faaeaf4bfc77cbfff9f4dcc8ee23e1ca5cc9350f1e8d0527b2604e00af3ee304-merged.mount: Deactivated successfully. Oct 5 06:03:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e1603d383c5ec9fecc9a22f7cce6a296b4e4cbaf2962984208c912e8379fc4f-userdata-shm.mount: Deactivated successfully. Oct 5 06:03:58 localhost systemd[1]: run-netns-qdhcp\x2d39e5c582\x2dd241\x2d406e\x2d8646\x2d5f1118b9aecb.mount: Deactivated successfully. 
Oct 5 06:03:58 localhost nova_compute[297021]: 2025-10-05 10:03:58.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:58 localhost nova_compute[297021]: 2025-10-05 10:03:58.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:04:01 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 3217 writes, 26K keys, 3216 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.08 MB/s#012Cumulative WAL: 3217 writes, 3216 syncs, 1.00 writes per sync, written: 0.05 GB, 0.08 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3217 writes, 26K keys, 3216 commit groups, 1.0 writes per commit group, ingest: 47.97 MB, 0.08 MB/s#012Interval WAL: 3217 writes, 3216 syncs, 1.00 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 146.8 0.24 0.09 12 0.020 0 0 0.0 0.0#012 L6 1/0 15.94 MB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 4.9 159.7 143.8 1.18 0.50 11 0.108 129K 5639 0.0 0.0#012 Sum 1/0 15.94 MB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 5.9 133.1 144.3 1.42 0.58 23 0.062 129K 5639 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 5.9 133.3 144.6 1.42 0.58 22 0.064 129K 
5639 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 0.0 159.7 143.8 1.18 0.50 11 0.108 129K 5639 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 148.4 0.23 0.09 11 0.021 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.034, interval 0.034#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.34 MB/s write, 0.18 GB read, 0.32 MB/s read, 1.4 seconds#012Interval compaction: 0.20 GB write, 0.34 MB/s write, 0.18 GB read, 0.32 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e0777f3350#2 capacity: 308.00 MB usage: 47.07 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000417 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3395,46.19 MB,14.9968%) FilterBlock(23,381.42 KB,0.120936%) IndexBlock(23,518.83 KB,0.164503%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Oct 5 
06:04:01 localhost ovn_controller[157794]: 2025-10-05T10:04:01Z|00162|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:04:01 localhost nova_compute[297021]: 2025-10-05 10:04:01.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e106 do_prune osdmap full prune enabled Oct 5 06:04:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 e107: 6 total, 6 up, 6 in Oct 5 06:04:01 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e107: 6 total, 6 up, 6 in Oct 5 06:04:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:04:01 localhost podman[326941]: 2025-10-05 10:04:01.6660284 +0000 UTC m=+0.078033040 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:04:01 localhost podman[326941]: 2025-10-05 10:04:01.706964162 +0000 UTC m=+0.118968842 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:04:01 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:04:03 localhost nova_compute[297021]: 2025-10-05 10:04:03.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:04:04 localhost podman[326966]: 2025-10-05 10:04:04.679688354 +0000 UTC m=+0.088938237 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:04:04 localhost podman[326966]: 2025-10-05 10:04:04.696819619 +0000 UTC m=+0.106069492 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible) Oct 5 06:04:04 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:04:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.513906) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646513968, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2113, "num_deletes": 261, "total_data_size": 1994212, "memory_usage": 2031320, "flush_reason": "Manual Compaction"} Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646527294, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1936401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25119, "largest_seqno": 27231, "table_properties": {"data_size": 1928002, "index_size": 5036, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18370, "raw_average_key_size": 20, "raw_value_size": 1910698, "raw_average_value_size": 2127, "num_data_blocks": 221, "num_entries": 898, "num_filter_entries": 898, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, 
"fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658490, "oldest_key_time": 1759658490, "file_creation_time": 1759658646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 13437 microseconds, and 5434 cpu microseconds. Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.527345) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1936401 bytes OK Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.527374) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.530751) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.530772) EVENT_LOG_v1 {"time_micros": 1759658646530766, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.530796) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1985328, prev total WAL file size 1985652, number of live WAL files 2. Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.531658) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303134' seq:72057594037927935, type:22 .. 
'6C6F676D0034323636' seq:0, type:0; will stop at (end) Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1891KB)], [45(15MB)] Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646531702, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 18655568, "oldest_snapshot_seqno": -1} Oct 5 06:04:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 12554 keys, 18483483 bytes, temperature: kUnknown Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646651379, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 18483483, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18409858, "index_size": 41140, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31429, "raw_key_size": 337831, "raw_average_key_size": 26, "raw_value_size": 18194105, "raw_average_value_size": 1449, "num_data_blocks": 1558, "num_entries": 12554, "num_filter_entries": 12554, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": 
"nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.651892) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 18483483 bytes Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.653971) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 154.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 15.9 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(19.2) write-amplify(9.5) OK, records in: 13093, records dropped: 539 output_compression: NoCompression Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.654007) EVENT_LOG_v1 {"time_micros": 1759658646653993, "job": 26, "event": "compaction_finished", "compaction_time_micros": 119982, "compaction_time_cpu_micros": 49249, "output_level": 6, "num_output_files": 1, "total_output_size": 18483483, "num_input_records": 13093, "num_output_records": 12554, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} 
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646654922, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646657492, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.531493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.657751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.657765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.657769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.657773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:04:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:06.657778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:04:06 localhost podman[326985]: 2025-10-05 10:04:06.68673811 +0000 UTC m=+0.099228645 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, vendor=Red Hat, Inc.)
Oct 5 06:04:06 localhost podman[326985]: 2025-10-05 10:04:06.731878237 +0000 UTC m=+0.144368812 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public)
Oct 5 06:04:06 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully.
Oct 5 06:04:07 localhost nova_compute[297021]: 2025-10-05 10:04:07.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:07 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:07.945 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:04:07 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:07.947 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 5 06:04:07 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:07.948 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 5 06:04:08 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:08.420 272040 INFO neutron.agent.linux.ip_lib [None req-258cf0e5-41ed-4a73-ac12-0b7c3cd840e3 - - - - - -] Device tap0fb63172-b0 cannot be used as it has no MAC address#033[00m
Oct 5 06:04:08 localhost nova_compute[297021]: 2025-10-05 10:04:08.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:04:08 localhost nova_compute[297021]: 2025-10-05 10:04:08.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:08 localhost kernel: device tap0fb63172-b0 entered promiscuous mode
Oct 5 06:04:08 localhost NetworkManager[5981]: [1759658648.4471] manager: (tap0fb63172-b0): new Generic device (/org/freedesktop/NetworkManager/Devices/30)
Oct 5 06:04:08 localhost nova_compute[297021]: 2025-10-05 10:04:08.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:08 localhost ovn_controller[157794]: 2025-10-05T10:04:08Z|00163|binding|INFO|Claiming lport 0fb63172-b039-41e4-9c54-319cacec4461 for this chassis.
Oct 5 06:04:08 localhost ovn_controller[157794]: 2025-10-05T10:04:08Z|00164|binding|INFO|0fb63172-b039-41e4-9c54-319cacec4461: Claiming unknown
Oct 5 06:04:08 localhost systemd-udevd[327014]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:04:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:08.459 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-33508787-158e-4c59-b509-bc8b1c234abc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33508787-158e-4c59-b509-bc8b1c234abc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9caf4e3-e7d0-4e19-ae41-35d7fa5fe9d8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0fb63172-b039-41e4-9c54-319cacec4461) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:04:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:08.461 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 0fb63172-b039-41e4-9c54-319cacec4461 in datapath 33508787-158e-4c59-b509-bc8b1c234abc bound to our chassis#033[00m
Oct 5 06:04:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:08.463 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 33508787-158e-4c59-b509-bc8b1c234abc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 5 06:04:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:08.464 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8c6f8e-437e-4d40-a8fc-e702ef88b3de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:04:08 localhost journal[237931]: ethtool ioctl error on tap0fb63172-b0: No such device
Oct 5 06:04:08 localhost ovn_controller[157794]: 2025-10-05T10:04:08Z|00165|binding|INFO|Setting lport 0fb63172-b039-41e4-9c54-319cacec4461 ovn-installed in OVS
Oct 5 06:04:08 localhost ovn_controller[157794]: 2025-10-05T10:04:08Z|00166|binding|INFO|Setting lport 0fb63172-b039-41e4-9c54-319cacec4461 up in Southbound
Oct 5 06:04:08 localhost journal[237931]: ethtool ioctl error on tap0fb63172-b0: No such device
Oct 5 06:04:08 localhost nova_compute[297021]: 2025-10-05 10:04:08.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:08 localhost journal[237931]: ethtool ioctl error on tap0fb63172-b0: No such device
Oct 5 06:04:08 localhost journal[237931]: ethtool ioctl error on tap0fb63172-b0: No such device
Oct 5 06:04:08 localhost journal[237931]: ethtool ioctl error on tap0fb63172-b0: No such device
Oct 5 06:04:08 localhost journal[237931]: ethtool ioctl error on tap0fb63172-b0: No such device
Oct 5 06:04:08 localhost journal[237931]: ethtool ioctl error on tap0fb63172-b0: No such device
Oct 5 06:04:08 localhost journal[237931]: ethtool ioctl error on tap0fb63172-b0: No such device
Oct 5 06:04:08 localhost nova_compute[297021]: 2025-10-05 10:04:08.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:08 localhost nova_compute[297021]: 2025-10-05 10:04:08.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:09 localhost nova_compute[297021]: 2025-10-05 10:04:09.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:09 localhost podman[327085]:
Oct 5 06:04:09 localhost podman[327085]: 2025-10-05 10:04:09.402823563 +0000 UTC m=+0.092528694 container create 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 5 06:04:09 localhost systemd[1]: Started libpod-conmon-1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8.scope.
Oct 5 06:04:09 localhost podman[327085]: 2025-10-05 10:04:09.358061567 +0000 UTC m=+0.047766738 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:04:09 localhost systemd[1]: Started libcrun container.
Oct 5 06:04:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72cb480ca32e788a64b0e324ca5462f3723ca897a6f72492471b313f9d9f6789/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:04:09 localhost podman[327085]: 2025-10-05 10:04:09.488430828 +0000 UTC m=+0.178135949 container init 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 5 06:04:09 localhost podman[327085]: 2025-10-05 10:04:09.499769526 +0000 UTC m=+0.189474657 container start 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 5 06:04:09 localhost dnsmasq[327103]: started, version 2.85 cachesize 150
Oct 5 06:04:09 localhost dnsmasq[327103]: DNS service limited to local subnets
Oct 5 06:04:09 localhost dnsmasq[327103]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:04:09 localhost dnsmasq[327103]: warning: no upstream servers configured
Oct 5 06:04:09 localhost dnsmasq-dhcp[327103]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 5 06:04:09 localhost dnsmasq[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/addn_hosts - 0 addresses
Oct 5 06:04:09 localhost dnsmasq-dhcp[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/host
Oct 5 06:04:09 localhost dnsmasq-dhcp[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/opts
Oct 5 06:04:09 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:09.558 272040 INFO neutron.agent.dhcp.agent [None req-258cf0e5-41ed-4a73-ac12-0b7c3cd840e3 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:08Z, description=, device_id=e15d599b-9378-494c-bff9-5d8253ea976d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f676645f-5ff9-41e3-863f-57e563c92133, ip_allocation=immediate, mac_address=fa:16:3e:d6:d6:e3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:05Z, description=, dns_domain=, id=33508787-158e-4c59-b509-bc8b1c234abc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-297854889, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43244, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=939, status=ACTIVE, subnets=['17475d3f-8a97-4363-a224-cd4ce2ab61ef'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:07Z, vlan_transparent=None, network_id=33508787-158e-4c59-b509-bc8b1c234abc, port_security_enabled=False, project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=955, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:08Z on network 33508787-158e-4c59-b509-bc8b1c234abc#033[00m
Oct 5 06:04:09 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:09.623 272040 INFO neutron.agent.dhcp.agent [None req-af6d6642-fecf-4cc4-8acf-d0a4452bfa2f - - - - - -] DHCP configuration for ports {'cf2cc340-1df2-41ba-9006-99bcedab74d0'} is completed#033[00m
Oct 5 06:04:09 localhost dnsmasq[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/addn_hosts - 1 addresses
Oct 5 06:04:09 localhost dnsmasq-dhcp[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/host
Oct 5 06:04:09 localhost dnsmasq-dhcp[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/opts
Oct 5 06:04:09 localhost podman[327123]: 2025-10-05 10:04:09.771418733 +0000 UTC m=+0.071334019 container kill 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:04:09 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:09.932 272040 INFO neutron.agent.dhcp.agent [None req-258cf0e5-41ed-4a73-ac12-0b7c3cd840e3 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:08Z, description=, device_id=e15d599b-9378-494c-bff9-5d8253ea976d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f676645f-5ff9-41e3-863f-57e563c92133, ip_allocation=immediate, mac_address=fa:16:3e:d6:d6:e3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:05Z, description=, dns_domain=, id=33508787-158e-4c59-b509-bc8b1c234abc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-297854889, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43244, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=939, status=ACTIVE, subnets=['17475d3f-8a97-4363-a224-cd4ce2ab61ef'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:07Z, vlan_transparent=None, network_id=33508787-158e-4c59-b509-bc8b1c234abc, port_security_enabled=False, project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=955, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:08Z on network 33508787-158e-4c59-b509-bc8b1c234abc#033[00m
Oct 5 06:04:10 localhost dnsmasq[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/addn_hosts - 1 addresses
Oct 5 06:04:10 localhost podman[327161]: 2025-10-05 10:04:10.140198548 +0000 UTC m=+0.059583609 container kill 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 06:04:10 localhost dnsmasq-dhcp[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/host
Oct 5 06:04:10 localhost dnsmasq-dhcp[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/opts
Oct 5 06:04:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:10.226 272040 INFO neutron.agent.dhcp.agent [None req-794572a1-8d4a-4e0f-b9b3-a0207163c62c - - - - - -] DHCP configuration for ports {'f676645f-5ff9-41e3-863f-57e563c92133'} is completed#033[00m
Oct 5 06:04:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:10.419 272040 INFO neutron.agent.dhcp.agent [None req-f0d448b7-cba4-45ee-a92e-ad890e42cfe3 - - - - - -] DHCP configuration for ports {'f676645f-5ff9-41e3-863f-57e563c92133'} is completed#033[00m
Oct 5 06:04:10 localhost nova_compute[297021]: 2025-10-05 10:04:10.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:04:10 localhost nova_compute[297021]: 2025-10-05 10:04:10.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 5 06:04:11 localhost nova_compute[297021]: 2025-10-05 10:04:11.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:04:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:04:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.
Oct 5 06:04:11 localhost podman[327182]: 2025-10-05 10:04:11.685642349 +0000 UTC m=+0.092119033 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 06:04:11 localhost podman[327182]: 2025-10-05 10:04:11.725779338 +0000 UTC m=+0.132256012 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 5 06:04:11 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 06:04:12 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:12.008 272040 INFO neutron.agent.linux.ip_lib [None req-b8bc1b67-b8f8-46f2-9652-5e358c3ed919 - - - - - -] Device tap509c26be-b6 cannot be used as it has no MAC address#033[00m
Oct 5 06:04:12 localhost nova_compute[297021]: 2025-10-05 10:04:12.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:12 localhost kernel: device tap509c26be-b6 entered promiscuous mode
Oct 5 06:04:12 localhost NetworkManager[5981]: [1759658652.0726] manager: (tap509c26be-b6): new Generic device (/org/freedesktop/NetworkManager/Devices/31)
Oct 5 06:04:12 localhost ovn_controller[157794]: 2025-10-05T10:04:12Z|00167|binding|INFO|Claiming lport 509c26be-b6b5-4e80-8247-577851981071 for this chassis.
Oct 5 06:04:12 localhost ovn_controller[157794]: 2025-10-05T10:04:12Z|00168|binding|INFO|509c26be-b6b5-4e80-8247-577851981071: Claiming unknown
Oct 5 06:04:12 localhost nova_compute[297021]: 2025-10-05 10:04:12.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:12 localhost systemd-udevd[327215]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:04:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:12.082 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-ff2291f8-b7af-48d7-915b-7a5d2cf0724d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff2291f8-b7af-48d7-915b-7a5d2cf0724d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff80b93002a40fda33dc9fbdac9814e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=748cb192-d1b6-46b2-b68f-4d1146af4e5b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=509c26be-b6b5-4e80-8247-577851981071) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:04:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:12.084 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 509c26be-b6b5-4e80-8247-577851981071 in datapath ff2291f8-b7af-48d7-915b-7a5d2cf0724d bound to our chassis#033[00m
Oct 5 06:04:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:12.085 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ff2291f8-b7af-48d7-915b-7a5d2cf0724d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 5 06:04:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:12.087 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[444bbd35-d744-47ec-9e6e-ad45ff9226a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:04:12 localhost journal[237931]: ethtool ioctl error on tap509c26be-b6: No such device
Oct 5 06:04:12 localhost ovn_controller[157794]: 2025-10-05T10:04:12Z|00169|binding|INFO|Setting lport 509c26be-b6b5-4e80-8247-577851981071 ovn-installed in OVS
Oct 5 06:04:12 localhost ovn_controller[157794]: 2025-10-05T10:04:12Z|00170|binding|INFO|Setting lport 509c26be-b6b5-4e80-8247-577851981071 up in Southbound
Oct 5 06:04:12 localhost nova_compute[297021]: 2025-10-05 10:04:12.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:12 localhost journal[237931]: ethtool ioctl error on tap509c26be-b6: No such device
Oct 5 06:04:12 localhost journal[237931]: ethtool ioctl error on tap509c26be-b6: No such device
Oct 5 06:04:12 localhost journal[237931]: ethtool ioctl error on tap509c26be-b6: No such device
Oct 5 06:04:12 localhost journal[237931]: ethtool ioctl error on tap509c26be-b6: No such device
Oct 5 06:04:12 localhost journal[237931]: ethtool ioctl error on tap509c26be-b6: No such device
Oct 5 06:04:12 localhost journal[237931]: ethtool ioctl error on tap509c26be-b6: No such device
Oct 5 06:04:12 localhost journal[237931]: ethtool ioctl error on tap509c26be-b6: No such device
Oct 5 06:04:12 localhost nova_compute[297021]: 2025-10-05 10:04:12.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:12 localhost nova_compute[297021]: 2025-10-05 10:04:12.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:13 localhost podman[327286]:
Oct 5 06:04:13 localhost podman[327286]: 2025-10-05 10:04:13.171422649 +0000 UTC m=+0.093239653 container create 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 06:04:13 localhost systemd[1]: Started libpod-conmon-067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c.scope.
Oct 5 06:04:13 localhost podman[327286]: 2025-10-05 10:04:13.127231289 +0000 UTC m=+0.049048313 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:04:13 localhost systemd[1]: tmp-crun.npHWvz.mount: Deactivated successfully.
Oct 5 06:04:13 localhost systemd[1]: Started libcrun container.
Oct 5 06:04:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b1a497ccf58acf3cdf08dcafe8490bce0f494c4d9f89ef2eec7017cc20bd40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:04:13 localhost podman[327286]: 2025-10-05 10:04:13.269371639 +0000 UTC m=+0.191188633 container init 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:04:13 localhost podman[327286]: 2025-10-05 10:04:13.279274488 +0000 UTC m=+0.201091482 container start 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:13 localhost dnsmasq[327304]: started, version 2.85 cachesize 150 Oct 5 06:04:13 localhost dnsmasq[327304]: DNS service limited to local subnets Oct 5 06:04:13 localhost dnsmasq[327304]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:04:13 localhost dnsmasq[327304]: warning: no upstream servers configured Oct 
5 06:04:13 localhost dnsmasq-dhcp[327304]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:04:13 localhost dnsmasq[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/addn_hosts - 0 addresses Oct 5 06:04:13 localhost dnsmasq-dhcp[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/host Oct 5 06:04:13 localhost dnsmasq-dhcp[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/opts Oct 5 06:04:13 localhost nova_compute[297021]: 2025-10-05 10:04:13.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:04:13 localhost nova_compute[297021]: 2025-10-05 10:04:13.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:04:13 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:13.494 272040 INFO neutron.agent.dhcp.agent [None req-9c9e841d-bcd1-49a0-84ca-2ae4e2b1c595 - - - - - -] DHCP configuration for ports {'f8f1a12f-1999-442c-a963-d2014aa9d2f1'} is completed#033[00m Oct 5 06:04:14 localhost nova_compute[297021]: 2025-10-05 10:04:14.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:15 localhost nova_compute[297021]: 2025-10-05 10:04:15.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:04:15 localhost nova_compute[297021]: 2025-10-05 10:04:15.445 2 DEBUG 
oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:04:15 localhost nova_compute[297021]: 2025-10-05 10:04:15.445 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:04:15 localhost nova_compute[297021]: 2025-10-05 10:04:15.446 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:04:15 localhost nova_compute[297021]: 2025-10-05 10:04:15.446 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:04:15 localhost nova_compute[297021]: 2025-10-05 10:04:15.446 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:04:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:04:15 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1355488034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:04:15 localhost nova_compute[297021]: 2025-10-05 10:04:15.856 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:04:15 localhost nova_compute[297021]: 2025-10-05 10:04:15.924 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:04:15 localhost nova_compute[297021]: 2025-10-05 10:04:15.924 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.136 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.138 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11223MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.138 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.139 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.214 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.215 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.215 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.271 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:04:16 localhost ceph-mon[308154]: 
log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3745425265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.721 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.728 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.742 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.767 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:04:16 localhost nova_compute[297021]: 2025-10-05 10:04:16.768 2 
DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:04:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:04:17 localhost podman[327354]: 2025-10-05 10:04:17.007599541 +0000 UTC m=+0.085657718 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:04:17 localhost 
podman[327354]: 2025-10-05 10:04:17.01755258 +0000 UTC m=+0.095610757 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:04:17 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:04:17 localhost dnsmasq[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/addn_hosts - 0 addresses Oct 5 06:04:17 localhost dnsmasq-dhcp[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/host Oct 5 06:04:17 localhost podman[327391]: 2025-10-05 10:04:17.136495951 +0000 UTC m=+0.064362339 container kill 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:17 localhost dnsmasq-dhcp[327103]: read /var/lib/neutron/dhcp/33508787-158e-4c59-b509-bc8b1c234abc/opts Oct 5 06:04:17 localhost ovn_controller[157794]: 2025-10-05T10:04:17Z|00171|binding|INFO|Releasing lport 0fb63172-b039-41e4-9c54-319cacec4461 from this chassis (sb_readonly=0) Oct 5 06:04:17 localhost ovn_controller[157794]: 2025-10-05T10:04:17Z|00172|binding|INFO|Setting lport 0fb63172-b039-41e4-9c54-319cacec4461 down in Southbound Oct 5 06:04:17 localhost nova_compute[297021]: 2025-10-05 10:04:17.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:17 localhost kernel: device tap0fb63172-b0 left promiscuous mode Oct 5 06:04:17 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:17.344 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], 
up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-33508787-158e-4c59-b509-bc8b1c234abc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33508787-158e-4c59-b509-bc8b1c234abc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9caf4e3-e7d0-4e19-ae41-35d7fa5fe9d8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0fb63172-b039-41e4-9c54-319cacec4461) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:04:17 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:17.347 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 0fb63172-b039-41e4-9c54-319cacec4461 in datapath 33508787-158e-4c59-b509-bc8b1c234abc unbound from our chassis#033[00m Oct 5 06:04:17 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:17.349 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 33508787-158e-4c59-b509-bc8b1c234abc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:04:17 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:17.350 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ee9d70-cd1d-42b5-8a82-1dca7fee1146]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:04:17 localhost nova_compute[297021]: 2025-10-05 10:04:17.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:17 localhost nova_compute[297021]: 2025-10-05 10:04:17.769 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:04:17 localhost nova_compute[297021]: 2025-10-05 10:04:17.770 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:04:18 localhost nova_compute[297021]: 2025-10-05 10:04:18.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. 
Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.726924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658726992, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 392, "num_deletes": 251, "total_data_size": 131627, "memory_usage": 138648, "flush_reason": "Manual Compaction"} Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658730726, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 128573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27232, "largest_seqno": 27623, "table_properties": {"data_size": 126330, "index_size": 354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6044, "raw_average_key_size": 19, "raw_value_size": 121771, "raw_average_value_size": 394, "num_data_blocks": 16, "num_entries": 309, "num_filter_entries": 309, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658646, "oldest_key_time": 1759658646, "file_creation_time": 1759658658, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3835 microseconds, and 1271 cpu microseconds. Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.730776) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 128573 bytes OK Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.730797) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.733212) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.733238) EVENT_LOG_v1 {"time_micros": 1759658658733231, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.733261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 129112, prev total WAL file size 129112, number of live 
WAL files 2. Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.734002) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end) Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(125KB)], [48(17MB)] Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658734072, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 18612056, "oldest_snapshot_seqno": -1} Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 12348 keys, 16071031 bytes, temperature: kUnknown Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658818095, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 16071031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16000842, "index_size": 38216, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30917, "raw_key_size": 334007, "raw_average_key_size": 27, "raw_value_size": 15790782, "raw_average_value_size": 1278, 
"num_data_blocks": 1433, "num_entries": 12348, "num_filter_entries": 12348, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658658, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.818376) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 16071031 bytes Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.823313) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.3 rd, 191.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 17.6 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(269.8) write-amplify(125.0) OK, records in: 12863, records dropped: 515 output_compression: NoCompression Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.823343) EVENT_LOG_v1 {"time_micros": 1759658658823330, "job": 28, "event": "compaction_finished", "compaction_time_micros": 84109, "compaction_time_cpu_micros": 45101, "output_level": 6, "num_output_files": 1, "total_output_size": 16071031, "num_input_records": 12863, "num_output_records": 12348, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658823555, "job": 28, "event": "table_file_deletion", "file_number": 50} Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658825995, "job": 
28, "event": "table_file_deletion", "file_number": 48} Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.733845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.826064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.826074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.826077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.826080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:04:18.826083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:04:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3636008777' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:04:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:04:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3636008777' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:04:19 localhost nova_compute[297021]: 2025-10-05 10:04:19.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:19 localhost nova_compute[297021]: 2025-10-05 10:04:19.423 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:04:19 localhost nova_compute[297021]: 2025-10-05 10:04:19.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:04:19 localhost nova_compute[297021]: 2025-10-05 10:04:19.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:04:19 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:19.431 2 INFO neutron.agent.securitygroups_rpc [None req-b873fce1-f809-4b34-a034-ad5a5a62539e 39f5838e84b5470ca86bd1fe4d24e208 3f4120f15a704a6bbf6e983fddbe14b0 - - default default] Security group member updated ['0f68a87a-4623-4f46-8cab-ade6cefe7174']#033[00m Oct 5 06:04:19 localhost nova_compute[297021]: 2025-10-05 10:04:19.594 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:04:19 localhost nova_compute[297021]: 2025-10-05 10:04:19.595 2 DEBUG 
oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:04:19 localhost nova_compute[297021]: 2025-10-05 10:04:19.596 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:04:19 localhost nova_compute[297021]: 2025-10-05 10:04:19.597 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:04:19 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:19.863 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:19Z, description=, device_id=3594095b-1838-4628-b147-c16cee0fd774, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=893016be-2f31-4c6b-88b6-d083a3c966b4, ip_allocation=immediate, mac_address=fa:16:3e:6f:b4:0f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:09Z, description=, dns_domain=, id=ff2291f8-b7af-48d7-915b-7a5d2cf0724d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-1346297016-network, port_security_enabled=True, project_id=eff80b93002a40fda33dc9fbdac9814e, provider:network_type=geneve, 
provider:physical_network=None, provider:segmentation_id=58713, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=975, status=ACTIVE, subnets=['88c4c7e4-d1b6-4350-bff7-2bb543ae29e6'], tags=[], tenant_id=eff80b93002a40fda33dc9fbdac9814e, updated_at=2025-10-05T10:04:10Z, vlan_transparent=None, network_id=ff2291f8-b7af-48d7-915b-7a5d2cf0724d, port_security_enabled=False, project_id=eff80b93002a40fda33dc9fbdac9814e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1025, status=DOWN, tags=[], tenant_id=eff80b93002a40fda33dc9fbdac9814e, updated_at=2025-10-05T10:04:19Z on network ff2291f8-b7af-48d7-915b-7a5d2cf0724d#033[00m Oct 5 06:04:20 localhost dnsmasq[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/addn_hosts - 1 addresses Oct 5 06:04:20 localhost dnsmasq-dhcp[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/host Oct 5 06:04:20 localhost dnsmasq-dhcp[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/opts Oct 5 06:04:20 localhost podman[327432]: 2025-10-05 10:04:20.068186899 +0000 UTC m=+0.050158053 container kill 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:04:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:20.323 272040 INFO neutron.agent.dhcp.agent [None req-867cac02-579d-4862-9baa-a67608372e01 - - - - - -] DHCP configuration for ports 
{'893016be-2f31-4c6b-88b6-d083a3c966b4'} is completed#033[00m Oct 5 06:04:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:20.466 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:04:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:20.467 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:04:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:20.467 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:04:20 localhost dnsmasq[327103]: exiting on receipt of SIGTERM Oct 5 06:04:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:20.528 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:19Z, description=, device_id=3594095b-1838-4628-b147-c16cee0fd774, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=893016be-2f31-4c6b-88b6-d083a3c966b4, ip_allocation=immediate, mac_address=fa:16:3e:6f:b4:0f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:09Z, description=, dns_domain=, id=ff2291f8-b7af-48d7-915b-7a5d2cf0724d, ipv4_address_scope=None, ipv6_address_scope=None, 
l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-1346297016-network, port_security_enabled=True, project_id=eff80b93002a40fda33dc9fbdac9814e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58713, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=975, status=ACTIVE, subnets=['88c4c7e4-d1b6-4350-bff7-2bb543ae29e6'], tags=[], tenant_id=eff80b93002a40fda33dc9fbdac9814e, updated_at=2025-10-05T10:04:10Z, vlan_transparent=None, network_id=ff2291f8-b7af-48d7-915b-7a5d2cf0724d, port_security_enabled=False, project_id=eff80b93002a40fda33dc9fbdac9814e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1025, status=DOWN, tags=[], tenant_id=eff80b93002a40fda33dc9fbdac9814e, updated_at=2025-10-05T10:04:19Z on network ff2291f8-b7af-48d7-915b-7a5d2cf0724d#033[00m Oct 5 06:04:20 localhost podman[327470]: 2025-10-05 10:04:20.528697766 +0000 UTC m=+0.057356659 container kill 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:20 localhost systemd[1]: libpod-1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8.scope: Deactivated successfully. 
Oct 5 06:04:20 localhost podman[327483]: 2025-10-05 10:04:20.596789855 +0000 UTC m=+0.049164547 container died 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:20 localhost systemd[1]: tmp-crun.ZiqcEH.mount: Deactivated successfully. Oct 5 06:04:20 localhost podman[327483]: 2025-10-05 10:04:20.690784987 +0000 UTC m=+0.143159689 container cleanup 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:04:20 localhost systemd[1]: libpod-conmon-1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8.scope: Deactivated successfully. 
Oct 5 06:04:20 localhost podman[327485]: 2025-10-05 10:04:20.716176737 +0000 UTC m=+0.157517779 container remove 1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33508787-158e-4c59-b509-bc8b1c234abc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:04:20 localhost dnsmasq[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/addn_hosts - 1 addresses Oct 5 06:04:20 localhost dnsmasq-dhcp[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/host Oct 5 06:04:20 localhost dnsmasq-dhcp[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/opts Oct 5 06:04:20 localhost podman[327523]: 2025-10-05 10:04:20.778909401 +0000 UTC m=+0.067107274 container kill 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 06:04:20 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:20.836 2 INFO neutron.agent.securitygroups_rpc [None req-fc2f7fbf-79da-4814-8711-a9ff97a2ad28 39f5838e84b5470ca86bd1fe4d24e208 3f4120f15a704a6bbf6e983fddbe14b0 - - default default] Security group member updated ['0f68a87a-4623-4f46-8cab-ade6cefe7174']#033[00m Oct 5 06:04:20 localhost podman[327543]: 2025-10-05 10:04:20.920207418 +0000 UTC m=+0.078667217 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 06:04:20 localhost podman[327543]: 2025-10-05 10:04:20.935860173 +0000 UTC m=+0.094320002 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:04:20 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:04:20 localhost podman[327544]: 2025-10-05 10:04:20.997758274 +0000 UTC m=+0.151135205 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:04:21 localhost podman[327544]: 2025-10-05 10:04:21.015945969 +0000 UTC m=+0.169322930 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd) Oct 5 06:04:21 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:21.016 272040 INFO neutron.agent.dhcp.agent [None req-7c487e1b-beba-4a62-81a4-5044af762535 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:21 localhost systemd[1]: 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 06:04:21 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:21.064 272040 INFO neutron.agent.dhcp.agent [None req-b78ba6c4-64ed-4a80-810e-85bb25067d7c - - - - - -] DHCP configuration for ports {'893016be-2f31-4c6b-88b6-d083a3c966b4'} is completed#033[00m Oct 5 06:04:21 localhost systemd[1]: var-lib-containers-storage-overlay-72cb480ca32e788a64b0e324ca5462f3723ca897a6f72492471b313f9d9f6789-merged.mount: Deactivated successfully. Oct 5 06:04:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1545c1ee561a6f09d0271101aeda292a5afaa5057e74c034250141a31dc4b2c8-userdata-shm.mount: Deactivated successfully. Oct 5 06:04:21 localhost systemd[1]: run-netns-qdhcp\x2d33508787\x2d158e\x2d4c59\x2db509\x2dbc8b1c234abc.mount: Deactivated successfully. Oct 5 06:04:21 localhost nova_compute[297021]: 2025-10-05 10:04:21.155 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": 
"tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:04:21 localhost nova_compute[297021]: 2025-10-05 10:04:21.173 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:04:21 localhost nova_compute[297021]: 2025-10-05 10:04:21.173 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:04:21 localhost podman[248506]: time="2025-10-05T10:04:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:04:21 localhost podman[248506]: @ - - [05/Oct/2025:10:04:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147502 "" "Go-http-client/1.1" Oct 5 06:04:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:21 localhost podman[248506]: @ - - [05/Oct/2025:10:04:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19838 "" "Go-http-client/1.1" Oct 5 06:04:22 localhost openstack_network_exporter[250601]: ERROR 10:04:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:04:22 localhost 
openstack_network_exporter[250601]: ERROR 10:04:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:04:22 localhost openstack_network_exporter[250601]: ERROR 10:04:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:04:22 localhost openstack_network_exporter[250601]: ERROR 10:04:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:04:22 localhost openstack_network_exporter[250601]: Oct 5 06:04:22 localhost openstack_network_exporter[250601]: ERROR 10:04:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:04:22 localhost openstack_network_exporter[250601]: Oct 5 06:04:22 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:22.768 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:23 localhost ovn_controller[157794]: 2025-10-05T10:04:23Z|00173|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:04:23 localhost nova_compute[297021]: 2025-10-05 10:04:23.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:24 localhost nova_compute[297021]: 2025-10-05 10:04:24.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:24 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:24.217 2 INFO neutron.agent.securitygroups_rpc [req-803059a0-e8ae-4dd9-b37f-165f2f3b99c6 req-91605528-ec08-4e30-8480-afd66ad061b2 47b0a607769d444e821972981f90739d eff80b93002a40fda33dc9fbdac9814e - - default default] Security group rule updated ['e6485e38-61f1-4967-b607-1efac3e82095']#033[00m Oct 5 06:04:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:04:25 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:04:25 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:25.367 2 INFO neutron.agent.securitygroups_rpc [None req-3f129a0c-3040-422c-9549-909e704a7a54 07c064cb999141c9a1e10d6cd219806f 318dd9dd1a494c039b49e420f4b0eccb - - default default] Security group member updated ['58cfae10-a4b4-45dd-9a1d-adbcaacaf651']#033[00m Oct 5 06:04:25 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:25.443 2 INFO neutron.agent.securitygroups_rpc [req-12bf250b-5e4a-4d0a-bf15-cb3b3b550047 req-29dda7fe-696c-4e16-a348-1f3552936e79 47b0a607769d444e821972981f90739d eff80b93002a40fda33dc9fbdac9814e - - default default] Security group rule updated ['e6485e38-61f1-4967-b607-1efac3e82095']#033[00m Oct 5 06:04:25 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:04:25 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:04:26 localhost nova_compute[297021]: 2025-10-05 10:04:26.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:26 localhost nova_compute[297021]: 2025-10-05 10:04:26.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:04:26 localhost ceph-mon[308154]: log_channel(audit) log 
[INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:04:27 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:04:28 localhost podman[327672]: 2025-10-05 10:04:28.673480729 +0000 UTC m=+0.075595444 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes 
Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_metadata_agent) Oct 5 06:04:28 localhost podman[327672]: 2025-10-05 10:04:28.679764299 +0000 UTC m=+0.081878994 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base 
Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:28 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:04:28 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:28.971 2 INFO neutron.agent.securitygroups_rpc [None req-7923a666-f93e-4612-b396-936fcbeb38e1 07c064cb999141c9a1e10d6cd219806f 318dd9dd1a494c039b49e420f4b0eccb - - default default] Security group member updated ['58cfae10-a4b4-45dd-9a1d-adbcaacaf651']#033[00m Oct 5 06:04:29 localhost nova_compute[297021]: 2025-10-05 10:04:29.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:30 localhost ovn_controller[157794]: 2025-10-05T10:04:30Z|00174|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:04:30 localhost nova_compute[297021]: 2025-10-05 10:04:30.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:31 localhost podman[327706]: 2025-10-05 10:04:31.864625552 +0000 UTC m=+0.059886517 container kill 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:31 localhost dnsmasq[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/addn_hosts - 0 addresses Oct 5 06:04:31 localhost dnsmasq-dhcp[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/host Oct 5 06:04:31 localhost dnsmasq-dhcp[327304]: read /var/lib/neutron/dhcp/ff2291f8-b7af-48d7-915b-7a5d2cf0724d/opts Oct 5 06:04:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:04:31 localhost systemd[1]: tmp-crun.0WGehA.mount: Deactivated successfully. Oct 5 06:04:31 localhost podman[327720]: 2025-10-05 10:04:31.993498282 +0000 UTC m=+0.093511081 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible) Oct 5 06:04:32 localhost podman[327720]: 2025-10-05 10:04:32.033975171 +0000 UTC m=+0.133987970 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:04:32 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:04:32 localhost ovn_controller[157794]: 2025-10-05T10:04:32Z|00175|binding|INFO|Releasing lport 509c26be-b6b5-4e80-8247-577851981071 from this chassis (sb_readonly=0) Oct 5 06:04:32 localhost nova_compute[297021]: 2025-10-05 10:04:32.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:32 localhost kernel: device tap509c26be-b6 left promiscuous mode Oct 5 06:04:32 localhost ovn_controller[157794]: 2025-10-05T10:04:32Z|00176|binding|INFO|Setting lport 509c26be-b6b5-4e80-8247-577851981071 down in Southbound Oct 5 06:04:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:32.192 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-ff2291f8-b7af-48d7-915b-7a5d2cf0724d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff2291f8-b7af-48d7-915b-7a5d2cf0724d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff80b93002a40fda33dc9fbdac9814e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=748cb192-d1b6-46b2-b68f-4d1146af4e5b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=509c26be-b6b5-4e80-8247-577851981071) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:04:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:32.194 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 509c26be-b6b5-4e80-8247-577851981071 in datapath ff2291f8-b7af-48d7-915b-7a5d2cf0724d unbound from our chassis#033[00m Oct 5 06:04:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:32.197 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff2291f8-b7af-48d7-915b-7a5d2cf0724d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:04:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:32.198 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[ab014d48-2289-4b19-8e50-90165ce5a2f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:04:32 localhost nova_compute[297021]: 2025-10-05 10:04:32.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:32 localhost ovn_controller[157794]: 2025-10-05T10:04:32Z|00177|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:04:32 localhost nova_compute[297021]: 2025-10-05 10:04:32.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:32 localhost sshd[327753]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:04:34 localhost nova_compute[297021]: 2025-10-05 10:04:34.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:34 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:34.780 272040 INFO neutron.agent.linux.ip_lib [None 
req-68f12031-bd5a-467c-90f4-362b3abd731c - - - - - -] Device tap5a573953-3d cannot be used as it has no MAC address#033[00m Oct 5 06:04:34 localhost nova_compute[297021]: 2025-10-05 10:04:34.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:34 localhost kernel: device tap5a573953-3d entered promiscuous mode Oct 5 06:04:34 localhost NetworkManager[5981]: [1759658674.8162] manager: (tap5a573953-3d): new Generic device (/org/freedesktop/NetworkManager/Devices/32) Oct 5 06:04:34 localhost ovn_controller[157794]: 2025-10-05T10:04:34Z|00178|binding|INFO|Claiming lport 5a573953-3d02-4219-a68d-f85478cdc96f for this chassis. Oct 5 06:04:34 localhost ovn_controller[157794]: 2025-10-05T10:04:34Z|00179|binding|INFO|5a573953-3d02-4219-a68d-f85478cdc96f: Claiming unknown Oct 5 06:04:34 localhost nova_compute[297021]: 2025-10-05 10:04:34.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:34 localhost systemd-udevd[327765]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:04:34 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:34.833 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9b4adbf-9887-42cb-9d34-ac326569c59f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5a573953-3d02-4219-a68d-f85478cdc96f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:04:34 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:34.835 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 5a573953-3d02-4219-a68d-f85478cdc96f in datapath bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2 bound to our chassis#033[00m Oct 5 06:04:34 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:34.837 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:04:34 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:34.838 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[af56e414-2981-4883-a977-a2f096a3b410]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:04:34 localhost ovn_controller[157794]: 2025-10-05T10:04:34Z|00180|binding|INFO|Setting lport 5a573953-3d02-4219-a68d-f85478cdc96f ovn-installed in OVS Oct 5 06:04:34 localhost ovn_controller[157794]: 2025-10-05T10:04:34Z|00181|binding|INFO|Setting lport 5a573953-3d02-4219-a68d-f85478cdc96f up in Southbound Oct 5 06:04:34 localhost journal[237931]: ethtool ioctl error on tap5a573953-3d: No such device Oct 5 06:04:34 localhost nova_compute[297021]: 2025-10-05 10:04:34.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:04:34 localhost journal[237931]: ethtool ioctl error on tap5a573953-3d: No such device Oct 5 06:04:34 localhost journal[237931]: ethtool ioctl error on tap5a573953-3d: No such device Oct 5 06:04:34 localhost journal[237931]: ethtool ioctl error on tap5a573953-3d: No such device Oct 5 06:04:34 localhost journal[237931]: ethtool ioctl error on tap5a573953-3d: No such device Oct 5 06:04:34 localhost journal[237931]: ethtool ioctl error on tap5a573953-3d: No such device Oct 5 06:04:34 localhost journal[237931]: ethtool ioctl error on tap5a573953-3d: No such device Oct 5 06:04:34 localhost journal[237931]: ethtool ioctl error on tap5a573953-3d: No such device Oct 5 06:04:34 localhost nova_compute[297021]: 2025-10-05 10:04:34.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:34 localhost nova_compute[297021]: 2025-10-05 10:04:34.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:34 localhost ovn_controller[157794]: 2025-10-05T10:04:34Z|00182|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:04:34 localhost podman[327773]: 2025-10-05 10:04:34.9815511 +0000 UTC m=+0.107892082 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 06:04:34 localhost nova_compute[297021]: 2025-10-05 10:04:34.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:34 localhost podman[327773]: 2025-10-05 10:04:34.990253316 +0000 UTC m=+0.116594348 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 
'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:04:35 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:04:35 localhost nova_compute[297021]: 2025-10-05 10:04:35.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:35 localhost systemd[1]: tmp-crun.tS7KmO.mount: Deactivated successfully. 
Oct 5 06:04:35 localhost dnsmasq[327304]: exiting on receipt of SIGTERM Oct 5 06:04:35 localhost podman[327851]: 2025-10-05 10:04:35.569690332 +0000 UTC m=+0.077598218 container kill 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:04:35 localhost systemd[1]: libpod-067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c.scope: Deactivated successfully. Oct 5 06:04:35 localhost podman[327866]: 2025-10-05 10:04:35.647145445 +0000 UTC m=+0.063622368 container died 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:35 localhost podman[327866]: 2025-10-05 10:04:35.678662582 +0000 UTC m=+0.095139465 container cleanup 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:35 localhost systemd[1]: libpod-conmon-067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c.scope: Deactivated successfully. Oct 5 06:04:35 localhost systemd[1]: var-lib-containers-storage-overlay-14b1a497ccf58acf3cdf08dcafe8490bce0f494c4d9f89ef2eec7017cc20bd40-merged.mount: Deactivated successfully. Oct 5 06:04:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c-userdata-shm.mount: Deactivated successfully. Oct 5 06:04:35 localhost podman[327869]: 2025-10-05 10:04:35.727555569 +0000 UTC m=+0.135358406 container remove 067f309d94ccdff72c4bba083dc80d1080d2c807bb8b998239a94bc06cdd390c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff2291f8-b7af-48d7-915b-7a5d2cf0724d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:35 localhost podman[327914]: Oct 5 06:04:35 localhost podman[327914]: 2025-10-05 10:04:35.892503729 +0000 UTC m=+0.093025917 container create 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:04:35 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:35.894 272040 INFO neutron.agent.dhcp.agent [None req-08d34e5f-9598-42a4-a03e-3ec94e43a9d8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:35 localhost systemd[1]: run-netns-qdhcp\x2dff2291f8\x2db7af\x2d48d7\x2d915b\x2d7a5d2cf0724d.mount: Deactivated successfully. Oct 5 06:04:35 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:35.896 272040 INFO neutron.agent.dhcp.agent [None req-08d34e5f-9598-42a4-a03e-3ec94e43a9d8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:35 localhost systemd[1]: Started libpod-conmon-7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b.scope. Oct 5 06:04:35 localhost podman[327914]: 2025-10-05 10:04:35.848600066 +0000 UTC m=+0.049122284 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:04:35 localhost systemd[1]: Started libcrun container. 
Oct 5 06:04:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80ed881390865ac2761f188b91744294df41b8f7028e9a3b9af206cfab4e19a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:04:35 localhost podman[327914]: 2025-10-05 10:04:35.979499362 +0000 UTC m=+0.180021560 container init 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:35 localhost podman[327914]: 2025-10-05 10:04:35.988453725 +0000 UTC m=+0.188975923 container start 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:04:35 localhost dnsmasq[327933]: started, version 2.85 cachesize 150 Oct 5 06:04:35 localhost dnsmasq[327933]: DNS service limited to local subnets Oct 5 06:04:35 localhost dnsmasq[327933]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:04:35 localhost dnsmasq[327933]: warning: no upstream servers configured Oct 
5 06:04:35 localhost dnsmasq-dhcp[327933]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:04:36 localhost dnsmasq[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/addn_hosts - 0 addresses Oct 5 06:04:36 localhost dnsmasq-dhcp[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/host Oct 5 06:04:36 localhost dnsmasq-dhcp[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/opts Oct 5 06:04:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:36.051 272040 INFO neutron.agent.dhcp.agent [None req-68f12031-bd5a-467c-90f4-362b3abd731c - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:34Z, description=, device_id=759e739c-dc88-479b-90fd-bbbfde581626, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c815f026-056b-42b3-acdc-68efc9a2c669, ip_allocation=immediate, mac_address=fa:16:3e:64:e9:d2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:32Z, description=, dns_domain=, id=bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-434129732, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42533, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1096, status=ACTIVE, subnets=['da0bfec6-9a45-4956-915e-f7ddf6efb01d'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:33Z, vlan_transparent=None, network_id=bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, port_security_enabled=False, 
project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1117, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:35Z on network bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2#033[00m Oct 5 06:04:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:36.099 272040 INFO neutron.agent.dhcp.agent [None req-c0e15afb-eea0-4c6e-a2a8-07ee7975503c - - - - - -] DHCP configuration for ports {'14595e04-f24a-4fa6-9477-264ed958a02b'} is completed#033[00m Oct 5 06:04:36 localhost podman[327952]: 2025-10-05 10:04:36.245199138 +0000 UTC m=+0.051039437 container kill 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:04:36 localhost dnsmasq[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/addn_hosts - 1 addresses Oct 5 06:04:36 localhost dnsmasq-dhcp[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/host Oct 5 06:04:36 localhost dnsmasq-dhcp[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/opts Oct 5 06:04:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:36.539 272040 INFO neutron.agent.dhcp.agent [None req-6b94559f-e538-4dca-a6d9-5b949f2be10a - - - - - -] DHCP configuration 
for ports {'c815f026-056b-42b3-acdc-68efc9a2c669'} is completed#033[00m Oct 5 06:04:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:36.732 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:37.520 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:34Z, description=, device_id=759e739c-dc88-479b-90fd-bbbfde581626, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c815f026-056b-42b3-acdc-68efc9a2c669, ip_allocation=immediate, mac_address=fa:16:3e:64:e9:d2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:32Z, description=, dns_domain=, id=bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-434129732, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42533, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1096, status=ACTIVE, subnets=['da0bfec6-9a45-4956-915e-f7ddf6efb01d'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:33Z, vlan_transparent=None, network_id=bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, port_security_enabled=False, project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1117, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:35Z on network 
bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2#033[00m Oct 5 06:04:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:04:37 localhost podman[327979]: 2025-10-05 10:04:37.729648752 +0000 UTC m=+0.135759909 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git) Oct 5 06:04:37 localhost systemd[1]: tmp-crun.c2pHmT.mount: Deactivated successfully. 
Oct 5 06:04:37 localhost podman[328002]: 2025-10-05 10:04:37.753654813 +0000 UTC m=+0.082550332 container kill 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:37 localhost dnsmasq[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/addn_hosts - 1 addresses Oct 5 06:04:37 localhost dnsmasq-dhcp[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/host Oct 5 06:04:37 localhost dnsmasq-dhcp[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/opts Oct 5 06:04:37 localhost podman[327979]: 2025-10-05 10:04:37.810749134 +0000 UTC m=+0.216860291 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, release=1755695350, config_id=edpm) Oct 5 06:04:37 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:04:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:38.010 272040 INFO neutron.agent.dhcp.agent [None req-707da562-b1bf-47f4-81f9-f478215bc0b6 - - - - - -] DHCP configuration for ports {'c815f026-056b-42b3-acdc-68efc9a2c669'} is completed#033[00m Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.839 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.840 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.840 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.852 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.853 12 DEBUG 
ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2f2e6b16-34bb-42c4-97f8-8cea7a7a80de', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:04:38.840641', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b44b7d30-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.064556785, 'message_signature': '4f0406a113886d6a211da46ca21f4bfa3ce31528142b800d83fb17debe6ebba7'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 
1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:04:38.840641', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b44b9612-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.064556785, 'message_signature': '83ddcfde0a025670a6500a9198c60b0f9fd557602c50c4a6bc9ef4265abd41d3'}]}, 'timestamp': '2025-10-05 10:04:38.854528', '_unique_id': 'c1c21eaf9a184b28a52715cb66b74cf4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.856 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.858 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.858 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.858 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'abdec47c-b96f-4694-9495-ee221a3c2e19', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:04:38.858461', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b44c46c0-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.064556785, 'message_signature': 'a8bd26800d41539dd8b69d0354f9e2d7ae1b3ec708ea53c45111eb8fe1fe8930'}, {'source': 'openstack', 'counter_name': 
'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:04:38.858461', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b44c5822-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.064556785, 'message_signature': '69b526a4453399a7600d7e29ac5d3165f104165aa24ce30011a0267215e82d50'}]}, 'timestamp': '2025-10-05 10:04:38.859389', '_unique_id': 'f35bd46fcc0d4a2cb24355542321680c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.860 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.861 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.865 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4190ec7a-2939-4e78-b13c-16b7e7dd12e6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.862019', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b44d665e-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': '97342c0b1625dde5c5175df62d327ca729b213434e1c0d96f8c08a0b75d6f50d'}]}, 'timestamp': '2025-10-05 10:04:38.866513', '_unique_id': '9f796a188984493aab84b1a8ee66cc9f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.867 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.869 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.870 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.888 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.889 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '779defa5-8877-4a83-8442-6160112c078d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:04:38.870176', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b450ea7c-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': '442e7fb24600ceac149a347d00bf1f63d578bed23678fc889e940e86ca1740d3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:04:38.870176', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b450ff4e-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': '4ee29585bd7d9acbf5d404b04a6c2411ae25ea531eb7f065ae5e1327cd350497'}]}, 'timestamp': '2025-10-05 10:04:38.889879', '_unique_id': '44dd5b78117e4d92a61b04e2429d9397'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.891 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.892 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.892 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e1d1d422-cf40-4581-a8ec-9b6003d84b33', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.892546', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b451797e-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': 'ed1b0bf464144db55aef31c9a1539d96043ef95c8fd138654b753144f46476cf'}]}, 'timestamp': '2025-10-05 10:04:38.893025', '_unique_id': '867b6db8cdab4619a685eaa1194d9401'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.893 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:04:38.895 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.895 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.895 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '14cc1e1d-8ea2-489a-bc9e-f0470fc800f9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:04:38.895165', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b451df04-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': '769c715cd1f06ee1a29f77a21a5600101bb3c88389b591248f2abcc3f7a234d9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:04:38.895165', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b451f4da-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': 'e23739883023ca60fb9114a5e79f0f8dbd3d1d818ff37facae6ac400d52ca4e4'}]}, 'timestamp': '2025-10-05 10:04:38.896153', '_unique_id': '56a2fd40f4714222a430b5aa0347c764'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.897 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:04:38.898 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.898 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f9fbd864-f487-44cd-9a39-9325adc56d07', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.898450', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b4525fce-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': 'f2af36605118142f26fd9bdab2c8490a8dc75fbe5fbdd2508c1f5d5598bead3a'}]}, 'timestamp': '2025-10-05 10:04:38.898916', '_unique_id': '675a349aae334f20b03c1c86cda19818'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:04:38.899 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.899 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.900 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.901 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.901 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.901 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.901 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '4e496c26-07ac-47a6-8077-4e2023056568', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:04:38.901327', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b452d184-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': 'd7337302a326b33154ab84ea9ebb2d6fa0a6df1966d7044fc8efa1802ae75e37'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:04:38.901327', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b452e1b0-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': '2efc6c1673412a5e5c000c784f39657e5abb3707e08829cf247ac8bb08d1ed8f'}]}, 'timestamp': '2025-10-05 10:04:38.902214', '_unique_id': '196019c79f8340cf9ae6fac4e3b7f6ad'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 
06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.903 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.904 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.904 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9c5eeb39-6320-45de-b690-007755295e25', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.904421', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b453490c-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': 'f3182e2932e78289e48b24df81182981b57000a9ebe152016963d97940fbe5a8'}]}, 'timestamp': '2025-10-05 10:04:38.904885', '_unique_id': 'b7edc97ff0f94c41a4fe4c26b8371108'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:04:38.906 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.929 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 16270000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '01163482-2ed5-42cc-a2a4-02a7303a6c0e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 16270000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:04:38.906968', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'b45711a4-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.152893554, 'message_signature': 
'78969600890d21efb49286fbf4440d4d8314b91ba8d46ea5c4d9d8850e9ce8e7'}]}, 'timestamp': '2025-10-05 10:04:38.929686', '_unique_id': 'a2a186c049aa4e0e93755fbd67814cca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR 
oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.931 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.932 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '57974fa6-7886-4029-896a-cb9235bea9c9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:04:38.931911', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4577b4e-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': '16f65b7600c3f9b9b4a10952ae272a46858d25e9770d1ca2ddaab4c635f66469'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:04:38.931911', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4578f76-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': '1acecfe1eeeae3ac1b5003ac51a543968356069b819dd37d81b59342f9635624'}]}, 'timestamp': '2025-10-05 10:04:38.932884', '_unique_id': '2023c47ca64142da8aa80f89cef30b51'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.933 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.935 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '165e1fca-f3c2-40b3-a537-acc9fa371fed', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.935146', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b457f8f8-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': '543f02b27194c6a874bb8dd5f9360843b7c9bfb30481ff4f57ee568433251828'}]}, 'timestamp': '2025-10-05 10:04:38.935636', '_unique_id': '60d54ba649a041a4bad72bcf4e7b659b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.937 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2d7cf953-0de9-40c2-bd4e-f8a32770113a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:04:38.938042', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4586aae-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': '3d72f7cf7184aac0519e37ea3c1441aa4156205789d732361fe11d9fa6f352eb'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:04:38.938042', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4587e90-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': 'ea5eb21c795d3c284e3af133d3929c881cf9991cbf36a292d53394987744f2f4'}]}, 'timestamp': '2025-10-05 10:04:38.939015', '_unique_id': '116e11c8587944609ca4fcfd51da7ee9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.940 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.941 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7eedced0-93fd-4550-8573-ff15a440259d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.941261', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b458e902-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': '3add68bbd8b7f193ceb087cb027790ebafaad2e9324b46ed3cae44c46fbc9347'}]}, 'timestamp': '2025-10-05 10:04:38.941750', '_unique_id': '9631cabf4fe7489e98c17698b292dfd8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.942 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.943 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.944 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2602c4c2-e9f3-4de3-bb3b-41b80d40bee8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.944049', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b45954be-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': '3a4c6665d2c34feaf2811c66de5edd07ea71b52e89cffeee2fdfb7221d023b2b'}]}, 'timestamp': '2025-10-05 10:04:38.944542', '_unique_id': '6d55e4633b2c4ba790f3d70006b55cec'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.945 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:04:38.946 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.946 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '51baa8b0-1f0c-4b84-8345-b2d585c237fd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.946699', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b459bc1a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': 'b4bab00e03d9b57c4b73b30e3d82cf049c42230e200cdde11a79dbad02ec440c'}]}, 'timestamp': '2025-10-05 10:04:38.947151', '_unique_id': '7703afb60df4464d8753ec1f7f7a8869'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.948 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.949 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.949 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '552a9b27-a38e-4b4d-9dd0-ae2564e26150', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.949463', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b45a2876-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': '33026ddb04f69293c8f3d7a599ded9c2c7d91f495ae092bc430b14ba64453430'}]}, 'timestamp': '2025-10-05 10:04:38.949929', '_unique_id': 'f123985bb137480a8757202ad5327366'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.950 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.951 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7366bf96-6ff7-4e2d-8a62-2a004c12efcc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:04:38.952030', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'b45a8c62-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.085967337, 'message_signature': 'a91e54b1859882197533c1671dcf94398565b568caebb580e9a508ea45232c4b'}]}, 'timestamp': '2025-10-05 10:04:38.952521', '_unique_id': 'e7fe56d3960b4eac872314e48182fd27'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR
oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.953 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.954 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.954 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.955 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '7fd8c558-f629-4a68-93c6-204c38207fe4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:04:38.954829', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b45af986-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.064556785, 'message_signature': '0345424736d7987c145163744e90d5a1c86a5cef723c087dfcd55b7691b1a10d'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:04:38.954829', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b45b0ade-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.064556785, 'message_signature': '26bca6b64ba916844ad92ba898ceb1454042c8a5f89eb33cdc71fcea0d2f5357'}]}, 'timestamp': '2025-10-05 10:04:38.955692', '_unique_id': '6071f96971cf4472a139f8edc84672c1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.956 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.957 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.957 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e136ee54-a69d-4ed6-9a11-ab2bbb872604', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:04:38.957829', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'b45b6eac-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.152893554, 'message_signature': 'e7aba283e9d70737531cfd423c159016e4ac351f40eb60240bf9c2f3359f9c7e'}]}, 'timestamp': '2025-10-05 10:04:38.958300', '_unique_id': '8e7e045cc14f4a548b7b3ed486c342eb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.959 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.960 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:04:38.960 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.961 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '15731e9f-0d39-4cfd-afe4-721ab17221fd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:04:38.960678', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': 'b45bde14-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': 'a974fbccf699b1051ebc58ad03ccd05df1818a6b526c3774403a9c437d02974d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:04:38.960678', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b45bee2c-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12158.094131568, 'message_signature': '51653a55191e67255d9909fed5a87259e7933d959b18881e98da29d28973c4bb'}]}, 'timestamp': '2025-10-05 10:04:38.961545', '_unique_id': 'c93f0c7bc3fa4ae7bcb172c5656c8137'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:04:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:04:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:04:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:04:38.962 12 ERROR oslo_messaging.notify.messaging Oct 5 06:04:39 localhost nova_compute[297021]: 2025-10-05 10:04:39.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:39 localhost nova_compute[297021]: 2025-10-05 10:04:39.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:39 localhost nova_compute[297021]: 2025-10-05 10:04:39.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:39 localhost podman[328050]: 2025-10-05 10:04:39.890621478 +0000 UTC m=+0.060086252 container kill 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:39 localhost dnsmasq[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/addn_hosts - 0 addresses Oct 5 06:04:39 localhost dnsmasq-dhcp[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/host Oct 5 06:04:39 localhost dnsmasq-dhcp[327933]: read /var/lib/neutron/dhcp/bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2/opts Oct 5 06:04:40 localhost ovn_controller[157794]: 2025-10-05T10:04:40Z|00183|binding|INFO|Releasing lport 5a573953-3d02-4219-a68d-f85478cdc96f from this chassis (sb_readonly=0) Oct 5 06:04:40 localhost ovn_controller[157794]: 2025-10-05T10:04:40Z|00184|binding|INFO|Setting lport 5a573953-3d02-4219-a68d-f85478cdc96f down in Southbound Oct 5 06:04:40 localhost kernel: device tap5a573953-3d left promiscuous mode Oct 5 06:04:40 localhost nova_compute[297021]: 2025-10-05 10:04:40.050 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:40 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:40.064 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9b4adbf-9887-42cb-9d34-ac326569c59f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5a573953-3d02-4219-a68d-f85478cdc96f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:04:40 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:40.071 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 5a573953-3d02-4219-a68d-f85478cdc96f in datapath bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2 unbound from our chassis#033[00m Oct 5 06:04:40 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:40.073 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There 
is no metadata port for network bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:04:40 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:40.074 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[bf061946-d16c-46a9-977f-1d3616b9fdc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:04:40 localhost nova_compute[297021]: 2025-10-05 10:04:40.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:40 localhost ovn_controller[157794]: 2025-10-05T10:04:40Z|00185|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:04:40 localhost nova_compute[297021]: 2025-10-05 10:04:40.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:41 localhost podman[328088]: 2025-10-05 10:04:41.363475877 +0000 UTC m=+0.062697793 container kill 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:04:41 localhost dnsmasq[327933]: exiting on receipt of SIGTERM Oct 5 06:04:41 localhost systemd[1]: libpod-7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b.scope: Deactivated successfully. 
Oct 5 06:04:41 localhost podman[328101]: 2025-10-05 10:04:41.444143958 +0000 UTC m=+0.059089656 container died 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:41 localhost podman[328101]: 2025-10-05 10:04:41.484836383 +0000 UTC m=+0.099782051 container cleanup 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:41 localhost systemd[1]: libpod-conmon-7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b.scope: Deactivated successfully. 
Oct 5 06:04:41 localhost podman[328102]: 2025-10-05 10:04:41.5285413 +0000 UTC m=+0.140974930 container remove 7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdcc67c2-cf74-4b56-8df6-95b4f0b4e5b2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:04:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:04:41 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:41.830 272040 INFO neutron.agent.dhcp.agent [None req-6016cf37-fb13-4051-8d8d-83c5c530189d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:04:41 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:41.831 272040 INFO neutron.agent.dhcp.agent [None req-6016cf37-fb13-4051-8d8d-83c5c530189d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:04:42 localhost ovn_controller[157794]: 2025-10-05T10:04:42Z|00186|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:04:42 localhost nova_compute[297021]: 2025-10-05 10:04:42.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:42.295 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:04:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.
Oct 5 06:04:42 localhost systemd[1]: var-lib-containers-storage-overlay-b80ed881390865ac2761f188b91744294df41b8f7028e9a3b9af206cfab4e19a-merged.mount: Deactivated successfully.
Oct 5 06:04:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7758a84c4f49ee04efc2ae7fd7a7dab53ae4dc63e28bba2e45a27649638f0e2b-userdata-shm.mount: Deactivated successfully.
Oct 5 06:04:42 localhost systemd[1]: run-netns-qdhcp\x2dbdcc67c2\x2dcf74\x2d4b56\x2d8df6\x2d95b4f0b4e5b2.mount: Deactivated successfully.
Oct 5 06:04:42 localhost podman[328128]: 2025-10-05 10:04:42.441194516 +0000 UTC m=+0.098949209 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 06:04:42 localhost podman[328128]: 2025-10-05 10:04:42.455842793 +0000 UTC m=+0.113597476 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 06:04:42 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 06:04:44 localhost nova_compute[297021]: 2025-10-05 10:04:44.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:45 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:45.380 272040 INFO neutron.agent.linux.ip_lib [None req-a6361dda-3777-4d8a-958b-eecc02b1838f - - - - - -] Device tapda23fd08-09 cannot be used as it has no MAC address#033[00m
Oct 5 06:04:45 localhost nova_compute[297021]: 2025-10-05 10:04:45.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:45 localhost kernel: device tapda23fd08-09 entered promiscuous mode
Oct 5 06:04:45 localhost NetworkManager[5981]: [1759658685.4168] manager: (tapda23fd08-09): new Generic device (/org/freedesktop/NetworkManager/Devices/33)
Oct 5 06:04:45 localhost ovn_controller[157794]: 2025-10-05T10:04:45Z|00187|binding|INFO|Claiming lport da23fd08-0934-4105-a822-89da1affa672 for this chassis.
Oct 5 06:04:45 localhost ovn_controller[157794]: 2025-10-05T10:04:45Z|00188|binding|INFO|da23fd08-0934-4105-a822-89da1affa672: Claiming unknown
Oct 5 06:04:45 localhost nova_compute[297021]: 2025-10-05 10:04:45.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:45 localhost systemd-udevd[328161]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:04:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:45.430 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-f9d13ee7-8f20-4535-9510-4427059f352b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9d13ee7-8f20-4535-9510-4427059f352b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb627536c66b4926b63cef7ffc0d33ce', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baea6acf-d1ce-48d4-8fd4-a7d550eb6263, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=da23fd08-0934-4105-a822-89da1affa672) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:04:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:45.432 163434 INFO neutron.agent.ovn.metadata.agent [-] Port da23fd08-0934-4105-a822-89da1affa672 in datapath f9d13ee7-8f20-4535-9510-4427059f352b bound to our chassis#033[00m
Oct 5 06:04:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:45.434 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f9d13ee7-8f20-4535-9510-4427059f352b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 5 06:04:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:45.435 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fd0623-7f6f-47c2-a617-544849b511b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:04:45 localhost ovn_controller[157794]: 2025-10-05T10:04:45Z|00189|binding|INFO|Setting lport da23fd08-0934-4105-a822-89da1affa672 ovn-installed in OVS
Oct 5 06:04:45 localhost ovn_controller[157794]: 2025-10-05T10:04:45Z|00190|binding|INFO|Setting lport da23fd08-0934-4105-a822-89da1affa672 up in Southbound
Oct 5 06:04:45 localhost journal[237931]: ethtool ioctl error on tapda23fd08-09: No such device
Oct 5 06:04:45 localhost nova_compute[297021]: 2025-10-05 10:04:45.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:45 localhost nova_compute[297021]: 2025-10-05 10:04:45.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:45 localhost journal[237931]: ethtool ioctl error on tapda23fd08-09: No such device
Oct 5 06:04:45 localhost journal[237931]: ethtool ioctl error on tapda23fd08-09: No such device
Oct 5 06:04:45 localhost journal[237931]: ethtool ioctl error on tapda23fd08-09: No such device
Oct 5 06:04:45 localhost journal[237931]: ethtool ioctl error on tapda23fd08-09: No such device
Oct 5 06:04:45 localhost journal[237931]: ethtool ioctl error on tapda23fd08-09: No such device
Oct 5 06:04:45 localhost journal[237931]: ethtool ioctl error on tapda23fd08-09: No such device
Oct 5 06:04:45 localhost journal[237931]: ethtool ioctl error on tapda23fd08-09: No such device
Oct 5 06:04:45 localhost nova_compute[297021]: 2025-10-05 10:04:45.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:45 localhost nova_compute[297021]: 2025-10-05 10:04:45.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:45 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:45.722 2 INFO neutron.agent.securitygroups_rpc [None req-29641d0e-a015-45b4-a935-7a2349b946b8 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m
Oct 5 06:04:45 localhost nova_compute[297021]: 2025-10-05 10:04:45.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:46 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:46.252 2 INFO neutron.agent.securitygroups_rpc [None req-8f518baf-0a73-42d4-8334-fb31cc8ac4e7 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m
Oct 5 06:04:46 localhost podman[328232]:
Oct 5 06:04:46 localhost podman[328232]: 2025-10-05 10:04:46.454533738 +0000 UTC m=+0.098336591 container create e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9d13ee7-8f20-4535-9510-4427059f352b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3)
Oct 5 06:04:46 localhost podman[328232]: 2025-10-05 10:04:46.404097618 +0000 UTC m=+0.047900531 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:04:46 localhost systemd[1]: Started libpod-conmon-e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9.scope.
Oct 5 06:04:46 localhost systemd[1]: Started libcrun container.
Oct 5 06:04:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/460b42c23c4aa4eead059ae51ae8150ec908d43e2af89d55567fd9c124f224f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:04:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:04:46 localhost podman[328232]: 2025-10-05 10:04:46.544246615 +0000 UTC m=+0.188049478 container init e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9d13ee7-8f20-4535-9510-4427059f352b, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 5 06:04:46 localhost podman[328232]: 2025-10-05 10:04:46.553094785 +0000 UTC m=+0.196897638 container start e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9d13ee7-8f20-4535-9510-4427059f352b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 5 06:04:46 localhost dnsmasq[328250]: started, version 2.85 cachesize 150
Oct 5 06:04:46 localhost dnsmasq[328250]: DNS service limited to local subnets
Oct 5 06:04:46 localhost dnsmasq[328250]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:04:46 localhost dnsmasq[328250]: warning: no upstream servers configured
Oct 5 06:04:46 localhost dnsmasq-dhcp[328250]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:04:46 localhost dnsmasq[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/addn_hosts - 0 addresses
Oct 5 06:04:46 localhost dnsmasq-dhcp[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/host
Oct 5 06:04:46 localhost dnsmasq-dhcp[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/opts
Oct 5 06:04:46 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:46.837 272040 INFO neutron.agent.dhcp.agent [None req-ae7645a6-fd9f-4439-806a-d2dc9849f777 - - - - - -] DHCP configuration for ports {'91d8a13e-ad6a-4091-b67b-71b450664f3b'} is completed#033[00m
Oct 5 06:04:47 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:47.137 2 INFO neutron.agent.securitygroups_rpc [None req-287a8dc9-b560-471a-93c2-acb6de35de57 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m
Oct 5 06:04:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 06:04:47 localhost podman[328251]: 2025-10-05 10:04:47.420038739 +0000 UTC m=+0.080769454 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 06:04:47 localhost podman[328251]: 2025-10-05 10:04:47.433922256 +0000 UTC m=+0.094652971 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 5 06:04:47 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 06:04:49 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:49.088 2 INFO neutron.agent.securitygroups_rpc [None req-097980e1-eded-469f-8d31-7bb2108118fd b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m
Oct 5 06:04:49 localhost nova_compute[297021]: 2025-10-05 10:04:49.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 5 06:04:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e108 e108: 6 total, 6 up, 6 in
Oct 5 06:04:49 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e108: 6 total, 6 up, 6 in
Oct 5 06:04:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 5 06:04:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e109 e109: 6 total, 6 up, 6 in
Oct 5 06:04:50 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e109: 6 total, 6 up, 6 in
Oct 5 06:04:51 localhost podman[248506]: time="2025-10-05T10:04:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:04:51 localhost podman[248506]: @ - - [05/Oct/2025:10:04:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147502 "" "Go-http-client/1.1"
Oct 5 06:04:51 localhost podman[248506]: @ - - [05/Oct/2025:10:04:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19848 "" "Go-http-client/1.1"
Oct 5 06:04:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 06:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 06:04:51 localhost podman[328276]: 2025-10-05 10:04:51.693709831 +0000 UTC m=+0.091856795 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 5 06:04:51 localhost podman[328276]: 2025-10-05 10:04:51.70986574 +0000 UTC m=+0.108012664 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Oct 5 06:04:51 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 06:04:51 localhost podman[328275]: 2025-10-05 10:04:51.670327036 +0000 UTC m=+0.076764005 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 5 06:04:51 localhost podman[328275]: 2025-10-05 10:04:51.78495756 +0000 UTC m=+0.191394529 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 06:04:51 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 06:04:51 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:51.808 2 INFO neutron.agent.securitygroups_rpc [None req-48ed4e42-a053-4e1f-8ebc-0e6db992bfed b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m
Oct 5 06:04:52 localhost openstack_network_exporter[250601]: ERROR 10:04:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:04:52 localhost openstack_network_exporter[250601]: ERROR 10:04:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:04:52 localhost openstack_network_exporter[250601]: ERROR 10:04:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:04:52 localhost openstack_network_exporter[250601]: ERROR 10:04:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:04:52 localhost openstack_network_exporter[250601]:
Oct 5 06:04:52 localhost openstack_network_exporter[250601]: ERROR 10:04:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:04:52 localhost openstack_network_exporter[250601]:
Oct 5 06:04:52 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:52.095 272040 INFO neutron.agent.linux.ip_lib [None req-860b7245-48ff-4814-b344-20d0482f3518 - - - - - -] Device tap143227f3-99 cannot be used as it has no MAC address#033[00m
Oct 5 06:04:52 localhost nova_compute[297021]: 2025-10-05 10:04:52.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:52 localhost kernel: device tap143227f3-99 entered promiscuous mode
Oct 5 06:04:52 localhost NetworkManager[5981]: [1759658692.1225] manager: (tap143227f3-99): new Generic device (/org/freedesktop/NetworkManager/Devices/34)
Oct 5 06:04:52 localhost nova_compute[297021]: 2025-10-05 10:04:52.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:52 localhost ovn_controller[157794]: 2025-10-05T10:04:52Z|00191|binding|INFO|Claiming lport 143227f3-99ea-469d-a1ca-8f720fd3fbf9 for this chassis.
Oct 5 06:04:52 localhost ovn_controller[157794]: 2025-10-05T10:04:52Z|00192|binding|INFO|143227f3-99ea-469d-a1ca-8f720fd3fbf9: Claiming unknown
Oct 5 06:04:52 localhost systemd-udevd[328323]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:04:52 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:52.134 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-0318891d-786a-4d66-8c6a-d1c92f1bd551', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0318891d-786a-4d66-8c6a-d1c92f1bd551', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90871647c86c4e79966a4566276d2128', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de552ddd-181f-4fc4-9803-7fb7dcd8de0f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=143227f3-99ea-469d-a1ca-8f720fd3fbf9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:04:52 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:52.139 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 143227f3-99ea-469d-a1ca-8f720fd3fbf9 in datapath 0318891d-786a-4d66-8c6a-d1c92f1bd551 bound to our chassis#033[00m
Oct 5 06:04:52 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:52.140 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0318891d-786a-4d66-8c6a-d1c92f1bd551 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 5 06:04:52 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:52.142 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[baa4ad38-22a3-4074-8b9a-1e04ddcb3787]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:04:52 localhost journal[237931]: ethtool ioctl error on tap143227f3-99: No such device
Oct 5 06:04:52 localhost ovn_controller[157794]: 2025-10-05T10:04:52Z|00193|binding|INFO|Setting lport 143227f3-99ea-469d-a1ca-8f720fd3fbf9 ovn-installed in OVS
Oct 5 06:04:52 localhost journal[237931]: ethtool ioctl error on tap143227f3-99: No such device
Oct 5 06:04:52 localhost ovn_controller[157794]: 2025-10-05T10:04:52Z|00194|binding|INFO|Setting lport 143227f3-99ea-469d-a1ca-8f720fd3fbf9 up in Southbound
Oct 5 06:04:52 localhost nova_compute[297021]: 2025-10-05 10:04:52.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:52 localhost journal[237931]: ethtool ioctl error on tap143227f3-99: No such device
Oct 5 06:04:52 localhost journal[237931]: ethtool ioctl error on tap143227f3-99: No such device
Oct 5 06:04:52 localhost journal[237931]: ethtool ioctl error on tap143227f3-99: No such device
Oct 5 06:04:52 localhost journal[237931]: ethtool ioctl error on tap143227f3-99: No such device
Oct 5 06:04:52 localhost journal[237931]: ethtool ioctl error on tap143227f3-99: No such device
Oct 5 06:04:52 localhost journal[237931]: ethtool ioctl error on tap143227f3-99: No such device
Oct 5 06:04:52 localhost nova_compute[297021]: 2025-10-05 10:04:52.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:52 localhost nova_compute[297021]: 2025-10-05 10:04:52.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:53 localhost podman[328395]:
Oct 5 06:04:53 localhost podman[328395]: 2025-10-05 10:04:53.128543009 +0000 UTC m=+0.091963539 container create 238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0318891d-786a-4d66-8c6a-d1c92f1bd551, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 5 06:04:53 localhost systemd[1]: Started libpod-conmon-238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b.scope.
Oct 5 06:04:53 localhost systemd[1]: Started libcrun container.
Oct 5 06:04:53 localhost podman[328395]: 2025-10-05 10:04:53.081957423 +0000 UTC m=+0.045378003 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:04:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25005fbbc2ae01c4bc228679556acb7b69c873daca20e69ba0d0d97bad9246b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:04:53 localhost podman[328395]: 2025-10-05 10:04:53.193313508 +0000 UTC m=+0.156734048 container init 238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0318891d-786a-4d66-8c6a-d1c92f1bd551, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:04:53 localhost podman[328395]: 2025-10-05 10:04:53.206130915 +0000 UTC m=+0.169551455 container start 238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0318891d-786a-4d66-8c6a-d1c92f1bd551, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 06:04:53 localhost dnsmasq[328413]: started, version 2.85 cachesize 150
Oct 5 06:04:53 localhost dnsmasq[328413]: DNS service limited to local subnets
Oct 5 06:04:53 localhost dnsmasq[328413]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:04:53 localhost dnsmasq[328413]: warning: no upstream servers configured
Oct 5 06:04:53 localhost dnsmasq-dhcp[328413]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:04:53 localhost dnsmasq[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/addn_hosts - 0 addresses
Oct 5 06:04:53 localhost dnsmasq-dhcp[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/host
Oct 5 06:04:53 localhost dnsmasq-dhcp[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/opts
Oct 5 06:04:53 localhost nova_compute[297021]: 2025-10-05 10:04:53.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:53 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:53.414 272040 INFO neutron.agent.dhcp.agent [None req-bf36635f-158d-476c-85f5-6e6230e7ad16 - - - - - -] DHCP configuration for ports {'70263513-d71e-43c0-aa6e-7691e77d7f7c'} is completed#033[00m
Oct 5 06:04:53 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:53.930 2 INFO neutron.agent.securitygroups_rpc [None req-3621eae7-9b8a-4d62-ad7a-61d9e831b33d b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m
Oct 5 06:04:54 localhost nova_compute[297021]: 2025-10-05 10:04:54.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:55 localhost nova_compute[297021]: 2025-10-05 10:04:55.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:04:55 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:55.986 272040 INFO neutron.agent.dhcp.agent [-] Trigger
reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:55Z, description=, device_id=35e9991f-3534-49b3-b850-f8e1ed668869, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a50e97e2-5f2d-4355-b0e5-31593ba4feda, ip_allocation=immediate, mac_address=fa:16:3e:27:96:47, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:47Z, description=, dns_domain=, id=0318891d-786a-4d66-8c6a-d1c92f1bd551, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-1498583765-network, port_security_enabled=True, project_id=90871647c86c4e79966a4566276d2128, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=36125, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1201, status=ACTIVE, subnets=['dd386389-82ad-4bae-b952-227f5113fe5b'], tags=[], tenant_id=90871647c86c4e79966a4566276d2128, updated_at=2025-10-05T10:04:50Z, vlan_transparent=None, network_id=0318891d-786a-4d66-8c6a-d1c92f1bd551, port_security_enabled=False, project_id=90871647c86c4e79966a4566276d2128, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1231, status=DOWN, tags=[], tenant_id=90871647c86c4e79966a4566276d2128, updated_at=2025-10-05T10:04:55Z on network 0318891d-786a-4d66-8c6a-d1c92f1bd551#033[00m Oct 5 06:04:56 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:56.175 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2025-10-05T10:04:55Z, description=, device_id=5cbeff01-190b-40ac-a0bb-d01d09f96b2b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=79bddf36-80b9-4b9b-9c1d-717bf0f84973, ip_allocation=immediate, mac_address=fa:16:3e:14:40:45, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:42Z, description=, dns_domain=, id=f9d13ee7-8f20-4535-9510-4427059f352b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-574486544-network, port_security_enabled=True, project_id=fb627536c66b4926b63cef7ffc0d33ce, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20439, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1151, status=ACTIVE, subnets=['6befd38b-0e71-4aa1-8a7b-7f5e52202317'], tags=[], tenant_id=fb627536c66b4926b63cef7ffc0d33ce, updated_at=2025-10-05T10:04:44Z, vlan_transparent=None, network_id=f9d13ee7-8f20-4535-9510-4427059f352b, port_security_enabled=False, project_id=fb627536c66b4926b63cef7ffc0d33ce, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1228, status=DOWN, tags=[], tenant_id=fb627536c66b4926b63cef7ffc0d33ce, updated_at=2025-10-05T10:04:55Z on network f9d13ee7-8f20-4535-9510-4427059f352b#033[00m Oct 5 06:04:56 localhost dnsmasq[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/addn_hosts - 1 addresses Oct 5 06:04:56 localhost dnsmasq-dhcp[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/host Oct 5 06:04:56 localhost dnsmasq-dhcp[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/opts Oct 5 06:04:56 localhost podman[328432]: 2025-10-05 10:04:56.199510689 +0000 UTC m=+0.062878689 container kill 
238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0318891d-786a-4d66-8c6a-d1c92f1bd551, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 06:04:56 localhost dnsmasq[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/addn_hosts - 1 addresses Oct 5 06:04:56 localhost podman[328471]: 2025-10-05 10:04:56.403504799 +0000 UTC m=+0.062721345 container kill e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9d13ee7-8f20-4535-9510-4427059f352b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 06:04:56 localhost dnsmasq-dhcp[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/host Oct 5 06:04:56 localhost dnsmasq-dhcp[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/opts Oct 5 06:04:56 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:56.499 272040 INFO neutron.agent.dhcp.agent [None req-be266408-1c3a-4574-99ed-dd206bedbbe7 - - - - - -] DHCP configuration for ports {'a50e97e2-5f2d-4355-b0e5-31593ba4feda'} is completed#033[00m Oct 5 06:04:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 
348127232 kv_alloc: 322961408 Oct 5 06:04:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e109 do_prune osdmap full prune enabled Oct 5 06:04:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e110 e110: 6 total, 6 up, 6 in Oct 5 06:04:56 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e110: 6 total, 6 up, 6 in Oct 5 06:04:56 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:56.659 272040 INFO neutron.agent.dhcp.agent [None req-2a387c9d-3771-41ce-a9fb-b71df6ed6a9e - - - - - -] DHCP configuration for ports {'79bddf36-80b9-4b9b-9c1d-717bf0f84973'} is completed#033[00m Oct 5 06:04:57 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:57.139 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:55Z, description=, device_id=35e9991f-3534-49b3-b850-f8e1ed668869, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a50e97e2-5f2d-4355-b0e5-31593ba4feda, ip_allocation=immediate, mac_address=fa:16:3e:27:96:47, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:47Z, description=, dns_domain=, id=0318891d-786a-4d66-8c6a-d1c92f1bd551, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-1498583765-network, port_security_enabled=True, project_id=90871647c86c4e79966a4566276d2128, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=36125, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1201, status=ACTIVE, subnets=['dd386389-82ad-4bae-b952-227f5113fe5b'], tags=[], tenant_id=90871647c86c4e79966a4566276d2128, updated_at=2025-10-05T10:04:50Z, 
vlan_transparent=None, network_id=0318891d-786a-4d66-8c6a-d1c92f1bd551, port_security_enabled=False, project_id=90871647c86c4e79966a4566276d2128, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1231, status=DOWN, tags=[], tenant_id=90871647c86c4e79966a4566276d2128, updated_at=2025-10-05T10:04:55Z on network 0318891d-786a-4d66-8c6a-d1c92f1bd551#033[00m Oct 5 06:04:57 localhost dnsmasq[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/addn_hosts - 1 addresses Oct 5 06:04:57 localhost dnsmasq-dhcp[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/host Oct 5 06:04:57 localhost dnsmasq-dhcp[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/opts Oct 5 06:04:57 localhost podman[328508]: 2025-10-05 10:04:57.385266941 +0000 UTC m=+0.069270712 container kill 238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0318891d-786a-4d66-8c6a-d1c92f1bd551, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:04:57 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:57.605 272040 INFO neutron.agent.dhcp.agent [None req-13432a52-40b3-49a8-a294-48fb4127360d - - - - - -] DHCP configuration for ports {'a50e97e2-5f2d-4355-b0e5-31593ba4feda'} is completed#033[00m Oct 5 06:04:57 localhost nova_compute[297021]: 2025-10-05 10:04:57.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:57 localhost neutron_sriov_agent[264984]: 2025-10-05 
10:04:57.799 2 INFO neutron.agent.securitygroups_rpc [None req-e6701a41-37af-4a2b-ae16-76078df6bfd1 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e110 do_prune osdmap full prune enabled Oct 5 06:04:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e111 e111: 6 total, 6 up, 6 in Oct 5 06:04:57 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e111: 6 total, 6 up, 6 in Oct 5 06:04:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e111 do_prune osdmap full prune enabled Oct 5 06:04:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e112 e112: 6 total, 6 up, 6 in Oct 5 06:04:58 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e112: 6 total, 6 up, 6 in Oct 5 06:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:04:59 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:59.219 272040 INFO neutron.agent.linux.ip_lib [None req-6a05d8e9-4817-4785-94f2-020e5715291a - - - - - -] Device tap128832f6-36 cannot be used as it has no MAC address#033[00m Oct 5 06:04:59 localhost nova_compute[297021]: 2025-10-05 10:04:59.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:59 localhost podman[328532]: 2025-10-05 10:04:59.239098487 +0000 UTC m=+0.079608683 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:04:59 localhost nova_compute[297021]: 2025-10-05 10:04:59.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:59 localhost kernel: device tap128832f6-36 entered promiscuous mode Oct 5 06:04:59 localhost NetworkManager[5981]: [1759658699.2627] manager: (tap128832f6-36): new Generic device (/org/freedesktop/NetworkManager/Devices/35) Oct 5 06:04:59 localhost nova_compute[297021]: 2025-10-05 10:04:59.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:59 localhost ovn_controller[157794]: 2025-10-05T10:04:59Z|00195|binding|INFO|Claiming lport 128832f6-3659-42ad-a233-0994c14b6966 for this chassis. Oct 5 06:04:59 localhost ovn_controller[157794]: 2025-10-05T10:04:59Z|00196|binding|INFO|128832f6-3659-42ad-a233-0994c14b6966: Claiming unknown Oct 5 06:04:59 localhost systemd-udevd[328558]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:04:59 localhost podman[328532]: 2025-10-05 10:04:59.275819085 +0000 UTC m=+0.116329261 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:04:59 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: 
Deactivated successfully. Oct 5 06:04:59 localhost journal[237931]: ethtool ioctl error on tap128832f6-36: No such device Oct 5 06:04:59 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:59.298 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-320e0419-3275-4cb6-bfa3-d8aacaf70469', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-320e0419-3275-4cb6-bfa3-d8aacaf70469', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9abbdec8-e161-4fbe-a1ad-8a51ccc81dbf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=128832f6-3659-42ad-a233-0994c14b6966) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:04:59 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:59.300 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 128832f6-3659-42ad-a233-0994c14b6966 in datapath 320e0419-3275-4cb6-bfa3-d8aacaf70469 bound to our chassis#033[00m Oct 5 06:04:59 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:59.301 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
320e0419-3275-4cb6-bfa3-d8aacaf70469 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:04:59 localhost ovn_controller[157794]: 2025-10-05T10:04:59Z|00197|binding|INFO|Setting lport 128832f6-3659-42ad-a233-0994c14b6966 ovn-installed in OVS Oct 5 06:04:59 localhost ovn_controller[157794]: 2025-10-05T10:04:59Z|00198|binding|INFO|Setting lport 128832f6-3659-42ad-a233-0994c14b6966 up in Southbound Oct 5 06:04:59 localhost journal[237931]: ethtool ioctl error on tap128832f6-36: No such device Oct 5 06:04:59 localhost ovn_metadata_agent[163429]: 2025-10-05 10:04:59.302 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[56f83e17-f678-46a8-81db-3adf4ea86506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:04:59 localhost nova_compute[297021]: 2025-10-05 10:04:59.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:59 localhost journal[237931]: ethtool ioctl error on tap128832f6-36: No such device Oct 5 06:04:59 localhost journal[237931]: ethtool ioctl error on tap128832f6-36: No such device Oct 5 06:04:59 localhost journal[237931]: ethtool ioctl error on tap128832f6-36: No such device Oct 5 06:04:59 localhost journal[237931]: ethtool ioctl error on tap128832f6-36: No such device Oct 5 06:04:59 localhost journal[237931]: ethtool ioctl error on tap128832f6-36: No such device Oct 5 06:04:59 localhost journal[237931]: ethtool ioctl error on tap128832f6-36: No such device Oct 5 06:04:59 localhost nova_compute[297021]: 2025-10-05 10:04:59.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:59 localhost nova_compute[297021]: 2025-10-05 10:04:59.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:59 localhost neutron_sriov_agent[264984]: 2025-10-05 10:04:59.410 2 INFO neutron.agent.securitygroups_rpc [None req-311e8aa1-b786-4d5b-a1f1-c8361acde964 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:59 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:04:59.486 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:55Z, description=, device_id=5cbeff01-190b-40ac-a0bb-d01d09f96b2b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=79bddf36-80b9-4b9b-9c1d-717bf0f84973, ip_allocation=immediate, mac_address=fa:16:3e:14:40:45, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:42Z, description=, dns_domain=, id=f9d13ee7-8f20-4535-9510-4427059f352b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-574486544-network, port_security_enabled=True, project_id=fb627536c66b4926b63cef7ffc0d33ce, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20439, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1151, status=ACTIVE, subnets=['6befd38b-0e71-4aa1-8a7b-7f5e52202317'], tags=[], tenant_id=fb627536c66b4926b63cef7ffc0d33ce, updated_at=2025-10-05T10:04:44Z, vlan_transparent=None, network_id=f9d13ee7-8f20-4535-9510-4427059f352b, port_security_enabled=False, project_id=fb627536c66b4926b63cef7ffc0d33ce, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, 
revision_number=1, security_groups=[], standard_attr_id=1228, status=DOWN, tags=[], tenant_id=fb627536c66b4926b63cef7ffc0d33ce, updated_at=2025-10-05T10:04:55Z on network f9d13ee7-8f20-4535-9510-4427059f352b#033[00m Oct 5 06:04:59 localhost podman[328616]: 2025-10-05 10:04:59.742416947 +0000 UTC m=+0.060080513 container kill e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9d13ee7-8f20-4535-9510-4427059f352b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:04:59 localhost dnsmasq[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/addn_hosts - 1 addresses Oct 5 06:04:59 localhost dnsmasq-dhcp[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/host Oct 5 06:04:59 localhost dnsmasq-dhcp[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/opts Oct 5 06:04:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e112 do_prune osdmap full prune enabled Oct 5 06:04:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e113 e113: 6 total, 6 up, 6 in Oct 5 06:04:59 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e113: 6 total, 6 up, 6 in Oct 5 06:05:00 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:00.093 272040 INFO neutron.agent.dhcp.agent [None req-f2355fa1-0c3d-493b-bb76-9d93b8edd048 - - - - - -] DHCP configuration for ports {'79bddf36-80b9-4b9b-9c1d-717bf0f84973'} is completed#033[00m Oct 5 06:05:00 localhost podman[328669]: Oct 5 06:05:00 localhost podman[328669]: 2025-10-05 10:05:00.149497652 +0000 UTC m=+0.087358434 container create 
34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-320e0419-3275-4cb6-bfa3-d8aacaf70469, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:05:00 localhost systemd[1]: Started libpod-conmon-34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b.scope. Oct 5 06:05:00 localhost podman[328669]: 2025-10-05 10:05:00.110747299 +0000 UTC m=+0.048608121 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:00 localhost systemd[1]: tmp-crun.qbYId0.mount: Deactivated successfully. Oct 5 06:05:00 localhost systemd[1]: Started libcrun container. 
Oct 5 06:05:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb46b85e46266a996795b40a08d0434026e585b715e640a7b1ef2381cdc7cc0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:00 localhost podman[328669]: 2025-10-05 10:05:00.25251853 +0000 UTC m=+0.190379312 container init 34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-320e0419-3275-4cb6-bfa3-d8aacaf70469, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:05:00 localhost podman[328669]: 2025-10-05 10:05:00.261365 +0000 UTC m=+0.199225782 container start 34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-320e0419-3275-4cb6-bfa3-d8aacaf70469, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:05:00 localhost dnsmasq[328687]: started, version 2.85 cachesize 150 Oct 5 06:05:00 localhost dnsmasq[328687]: DNS service limited to local subnets Oct 5 06:05:00 localhost dnsmasq[328687]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:00 localhost dnsmasq[328687]: warning: no upstream servers configured Oct 5 
06:05:00 localhost dnsmasq-dhcp[328687]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:05:00 localhost dnsmasq[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/addn_hosts - 0 addresses Oct 5 06:05:00 localhost dnsmasq-dhcp[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/host Oct 5 06:05:00 localhost dnsmasq-dhcp[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/opts Oct 5 06:05:00 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:00.321 272040 INFO neutron.agent.dhcp.agent [None req-6a05d8e9-4817-4785-94f2-020e5715291a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:59Z, description=, device_id=5cf66a08-ec7f-497b-ab97-d033f2d07e1b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=98884036-8140-4e25-86c8-d2bfed340dc3, ip_allocation=immediate, mac_address=fa:16:3e:96:63:c3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:55Z, description=, dns_domain=, id=320e0419-3275-4cb6-bfa3-d8aacaf70469, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1208930614, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21571, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1229, status=ACTIVE, subnets=['7bbae3e3-631b-41fd-940b-a2be7c9d00a6'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:58Z, vlan_transparent=None, network_id=320e0419-3275-4cb6-bfa3-d8aacaf70469, port_security_enabled=False, 
project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1242, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:59Z on network 320e0419-3275-4cb6-bfa3-d8aacaf70469#033[00m Oct 5 06:05:00 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:00.431 272040 INFO neutron.agent.dhcp.agent [None req-99e8d9a1-936e-4a8b-a98b-aac93ce83252 - - - - - -] DHCP configuration for ports {'74c7cdb2-ec4a-4827-9ee6-5ff505b4496b'} is completed#033[00m Oct 5 06:05:00 localhost dnsmasq[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/addn_hosts - 1 addresses Oct 5 06:05:00 localhost dnsmasq-dhcp[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/host Oct 5 06:05:00 localhost dnsmasq-dhcp[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/opts Oct 5 06:05:00 localhost podman[328706]: 2025-10-05 10:05:00.513863137 +0000 UTC m=+0.059269851 container kill 34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-320e0419-3275-4cb6-bfa3-d8aacaf70469, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:05:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 8804 writes, 36K keys, 8804 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 
MB/s#012Cumulative WAL: 8804 writes, 2210 syncs, 3.98 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3478 writes, 12K keys, 3478 commit groups, 1.0 writes per commit group, ingest: 12.77 MB, 0.02 MB/s#012Interval WAL: 3478 writes, 1461 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 06:05:00 localhost systemd-journald[47722]: Data hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Oct 5 06:05:00 localhost systemd-journald[47722]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 06:05:00 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 06:05:00 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 06:05:00 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:00.669 272040 INFO neutron.agent.dhcp.agent [None req-6a05d8e9-4817-4785-94f2-020e5715291a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:59Z, description=, device_id=5cf66a08-ec7f-497b-ab97-d033f2d07e1b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=98884036-8140-4e25-86c8-d2bfed340dc3, ip_allocation=immediate, mac_address=fa:16:3e:96:63:c3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:55Z, description=, dns_domain=, id=320e0419-3275-4cb6-bfa3-d8aacaf70469, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1208930614, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21571, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1229, status=ACTIVE, subnets=['7bbae3e3-631b-41fd-940b-a2be7c9d00a6'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:58Z, vlan_transparent=None, network_id=320e0419-3275-4cb6-bfa3-d8aacaf70469, port_security_enabled=False, project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1242, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:04:59Z on network 320e0419-3275-4cb6-bfa3-d8aacaf70469#033[00m Oct 5 06:05:00 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:00.782 272040 INFO neutron.agent.dhcp.agent 
[None req-c18fd507-351b-43e4-83c5-b76174c9513d - - - - - -] DHCP configuration for ports {'98884036-8140-4e25-86c8-d2bfed340dc3'} is completed#033[00m Oct 5 06:05:00 localhost dnsmasq[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/addn_hosts - 1 addresses Oct 5 06:05:00 localhost dnsmasq-dhcp[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/host Oct 5 06:05:00 localhost dnsmasq-dhcp[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/opts Oct 5 06:05:00 localhost podman[328746]: 2025-10-05 10:05:00.863124972 +0000 UTC m=+0.058371096 container kill 34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-320e0419-3275-4cb6-bfa3-d8aacaf70469, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:05:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e113 do_prune osdmap full prune enabled Oct 5 06:05:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e114 e114: 6 total, 6 up, 6 in Oct 5 06:05:00 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e114: 6 total, 6 up, 6 in Oct 5 06:05:01 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:01.122 272040 INFO neutron.agent.dhcp.agent [None req-f06ca303-dcbe-44fb-b58e-cd8cfa5ed46e - - - - - -] DHCP configuration for ports {'98884036-8140-4e25-86c8-d2bfed340dc3'} is completed#033[00m Oct 5 06:05:01 localhost dnsmasq[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/addn_hosts - 0 addresses Oct 5 06:05:01 localhost dnsmasq-dhcp[328413]: read 
/var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/host Oct 5 06:05:01 localhost podman[328782]: 2025-10-05 10:05:01.403997061 +0000 UTC m=+0.065942022 container kill 238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0318891d-786a-4d66-8c6a-d1c92f1bd551, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:01 localhost dnsmasq-dhcp[328413]: read /var/lib/neutron/dhcp/0318891d-786a-4d66-8c6a-d1c92f1bd551/opts Oct 5 06:05:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:01 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:01.786 2 INFO neutron.agent.securitygroups_rpc [None req-3404f74c-a45d-4b5e-8fd8-426d7f6d729d b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:05:01 localhost ovn_controller[157794]: 2025-10-05T10:05:01Z|00199|binding|INFO|Releasing lport 143227f3-99ea-469d-a1ca-8f720fd3fbf9 from this chassis (sb_readonly=0) Oct 5 06:05:01 localhost kernel: device tap143227f3-99 left promiscuous mode Oct 5 06:05:01 localhost ovn_controller[157794]: 2025-10-05T10:05:01Z|00200|binding|INFO|Setting lport 143227f3-99ea-469d-a1ca-8f720fd3fbf9 down in Southbound Oct 5 06:05:01 localhost nova_compute[297021]: 2025-10-05 10:05:01.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:01 localhost 
ovn_metadata_agent[163429]: 2025-10-05 10:05:01.824 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-0318891d-786a-4d66-8c6a-d1c92f1bd551', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0318891d-786a-4d66-8c6a-d1c92f1bd551', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90871647c86c4e79966a4566276d2128', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de552ddd-181f-4fc4-9803-7fb7dcd8de0f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=143227f3-99ea-469d-a1ca-8f720fd3fbf9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:01.826 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 143227f3-99ea-469d-a1ca-8f720fd3fbf9 in datapath 0318891d-786a-4d66-8c6a-d1c92f1bd551 unbound from our chassis#033[00m Oct 5 06:05:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:01.828 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0318891d-786a-4d66-8c6a-d1c92f1bd551, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:05:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:01.829 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[aed286ab-d824-41e6-8e96-2a3dbba39af2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:01 localhost nova_compute[297021]: 2025-10-05 10:05:01.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e114 do_prune osdmap full prune enabled Oct 5 06:05:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e115 e115: 6 total, 6 up, 6 in Oct 5 06:05:01 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e115: 6 total, 6 up, 6 in Oct 5 06:05:02 localhost dnsmasq[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/addn_hosts - 0 addresses Oct 5 06:05:02 localhost dnsmasq-dhcp[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/host Oct 5 06:05:02 localhost dnsmasq-dhcp[328687]: read /var/lib/neutron/dhcp/320e0419-3275-4cb6-bfa3-d8aacaf70469/opts Oct 5 06:05:02 localhost podman[328824]: 2025-10-05 10:05:02.157660549 +0000 UTC m=+0.060134855 container kill 34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-320e0419-3275-4cb6-bfa3-d8aacaf70469, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:05:02 localhost podman[328837]: 2025-10-05 10:05:02.280617718 +0000 UTC m=+0.092580995 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 06:05:02 localhost ovn_controller[157794]: 2025-10-05T10:05:02Z|00201|binding|INFO|Releasing lport 128832f6-3659-42ad-a233-0994c14b6966 from this chassis (sb_readonly=0) Oct 5 06:05:02 localhost kernel: device tap128832f6-36 left promiscuous mode Oct 5 06:05:02 localhost nova_compute[297021]: 2025-10-05 10:05:02.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:02 localhost ovn_controller[157794]: 2025-10-05T10:05:02Z|00202|binding|INFO|Setting lport 128832f6-3659-42ad-a233-0994c14b6966 down in Southbound Oct 5 06:05:02 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:02.341 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-320e0419-3275-4cb6-bfa3-d8aacaf70469', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-320e0419-3275-4cb6-bfa3-d8aacaf70469', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9abbdec8-e161-4fbe-a1ad-8a51ccc81dbf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=128832f6-3659-42ad-a233-0994c14b6966) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:02 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:02.343 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 128832f6-3659-42ad-a233-0994c14b6966 in datapath 320e0419-3275-4cb6-bfa3-d8aacaf70469 unbound from our chassis#033[00m Oct 5 06:05:02 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:02.345 163434 
DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 320e0419-3275-4cb6-bfa3-d8aacaf70469 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:02 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:02.348 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[74b19e3d-fa68-4de8-84fc-20ba016b1bd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:02 localhost podman[328837]: 2025-10-05 10:05:02.352949472 +0000 UTC m=+0.164912749 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
config_id=ovn_controller) Oct 5 06:05:02 localhost nova_compute[297021]: 2025-10-05 10:05:02.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:02 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:05:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e115 do_prune osdmap full prune enabled Oct 5 06:05:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e116 e116: 6 total, 6 up, 6 in Oct 5 06:05:02 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e116: 6 total, 6 up, 6 in Oct 5 06:05:04 localhost nova_compute[297021]: 2025-10-05 10:05:04.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:04 localhost systemd[1]: tmp-crun.X68GKA.mount: Deactivated successfully. Oct 5 06:05:04 localhost dnsmasq[328687]: exiting on receipt of SIGTERM Oct 5 06:05:04 localhost podman[328890]: 2025-10-05 10:05:04.420934264 +0000 UTC m=+0.073837467 container kill 34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-320e0419-3275-4cb6-bfa3-d8aacaf70469, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:05:04 localhost systemd[1]: libpod-34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b.scope: Deactivated successfully. 
Oct 5 06:05:04 localhost podman[328905]: 2025-10-05 10:05:04.501634865 +0000 UTC m=+0.057266626 container died 34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-320e0419-3275-4cb6-bfa3-d8aacaf70469, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:04 localhost systemd[1]: tmp-crun.fhHbLr.mount: Deactivated successfully. Oct 5 06:05:04 localhost podman[328905]: 2025-10-05 10:05:04.555609461 +0000 UTC m=+0.111241172 container remove 34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-320e0419-3275-4cb6-bfa3-d8aacaf70469, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:05:04 localhost systemd[1]: libpod-conmon-34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b.scope: Deactivated successfully. 
Oct 5 06:05:04 localhost ovn_controller[157794]: 2025-10-05T10:05:04Z|00203|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:05:04 localhost nova_compute[297021]: 2025-10-05 10:05:04.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:04 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:04.783 272040 INFO neutron.agent.dhcp.agent [None req-3ee6bf11-a399-4421-9d59-650cf687704e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:04 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:04.990 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:04 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e116 do_prune osdmap full prune enabled Oct 5 06:05:05 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e117 e117: 6 total, 6 up, 6 in Oct 5 06:05:05 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e117: 6 total, 6 up, 6 in Oct 5 06:05:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:05:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 8167 writes, 33K keys, 8167 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 8167 writes, 1890 syncs, 4.32 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2574 writes, 9357 keys, 2574 commit groups, 1.0 writes per commit group, ingest: 11.72 MB, 0.02 MB/s#012Interval WAL: 2574 writes, 1096 syncs, 2.35 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 06:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck 
run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:05:05 localhost systemd[1]: var-lib-containers-storage-overlay-4fb46b85e46266a996795b40a08d0434026e585b715e640a7b1ef2381cdc7cc0-merged.mount: Deactivated successfully. Oct 5 06:05:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34704df1f3bced176ff8b1c6137ba46e85b8ef95a95e639529e7e2fd2635ab3b-userdata-shm.mount: Deactivated successfully. Oct 5 06:05:05 localhost systemd[1]: run-netns-qdhcp\x2d320e0419\x2d3275\x2d4cb6\x2dbfa3\x2dd8aacaf70469.mount: Deactivated successfully. Oct 5 06:05:05 localhost podman[328931]: 2025-10-05 10:05:05.429416712 +0000 UTC m=+0.085708299 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:05 localhost podman[328931]: 2025-10-05 10:05:05.444844361 +0000 UTC m=+0.101135988 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3) Oct 5 06:05:05 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:05:05 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:05.519 2 INFO neutron.agent.securitygroups_rpc [None req-96ba4175-465c-43e2-a318-1d55fb6b05eb b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:05:05 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:05.789 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:05 localhost podman[328969]: 2025-10-05 10:05:05.900238969 +0000 UTC m=+0.063455405 container kill e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9d13ee7-8f20-4535-9510-4427059f352b, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:05 localhost dnsmasq[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/addn_hosts - 0 addresses Oct 5 06:05:05 localhost dnsmasq-dhcp[328250]: read /var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/host Oct 5 06:05:05 localhost dnsmasq-dhcp[328250]: read 
/var/lib/neutron/dhcp/f9d13ee7-8f20-4535-9510-4427059f352b/opts Oct 5 06:05:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e117 do_prune osdmap full prune enabled Oct 5 06:05:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e118 e118: 6 total, 6 up, 6 in Oct 5 06:05:06 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e118: 6 total, 6 up, 6 in Oct 5 06:05:06 localhost ovn_controller[157794]: 2025-10-05T10:05:06Z|00204|binding|INFO|Releasing lport da23fd08-0934-4105-a822-89da1affa672 from this chassis (sb_readonly=0) Oct 5 06:05:06 localhost kernel: device tapda23fd08-09 left promiscuous mode Oct 5 06:05:06 localhost ovn_controller[157794]: 2025-10-05T10:05:06Z|00205|binding|INFO|Setting lport da23fd08-0934-4105-a822-89da1affa672 down in Southbound Oct 5 06:05:06 localhost nova_compute[297021]: 2025-10-05 10:05:06.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:06.101 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-f9d13ee7-8f20-4535-9510-4427059f352b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9d13ee7-8f20-4535-9510-4427059f352b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb627536c66b4926b63cef7ffc0d33ce', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baea6acf-d1ce-48d4-8fd4-a7d550eb6263, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=da23fd08-0934-4105-a822-89da1affa672) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:06.103 163434 INFO neutron.agent.ovn.metadata.agent [-] Port da23fd08-0934-4105-a822-89da1affa672 in datapath f9d13ee7-8f20-4535-9510-4427059f352b unbound from our chassis#033[00m Oct 5 06:05:06 localhost nova_compute[297021]: 2025-10-05 10:05:06.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:06.105 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f9d13ee7-8f20-4535-9510-4427059f352b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:05:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:06.106 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1a73b7-dba8-418e-865e-2f758424ceaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:06 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:06.899 2 INFO neutron.agent.securitygroups_rpc [None req-c6bd9fde-6da0-4c32-a4a7-f28125de346f b03fcdb187d0440aa0b9048a2de09675 
93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:05:07 localhost ovn_controller[157794]: 2025-10-05T10:05:07Z|00206|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:05:07 localhost nova_compute[297021]: 2025-10-05 10:05:07.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e118 do_prune osdmap full prune enabled Oct 5 06:05:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e119 e119: 6 total, 6 up, 6 in Oct 5 06:05:07 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e119: 6 total, 6 up, 6 in Oct 5 06:05:08 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:08.238 2 INFO neutron.agent.securitygroups_rpc [None req-8902c5d7-9a45-4249-96d1-8b434e4645ef b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:05:08 localhost dnsmasq[328413]: exiting on receipt of SIGTERM Oct 5 06:05:08 localhost podman[329009]: 2025-10-05 10:05:08.462835541 +0000 UTC m=+0.071817251 container kill 238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0318891d-786a-4d66-8c6a-d1c92f1bd551, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:05:08 localhost systemd[1]: 
libpod-238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b.scope: Deactivated successfully. Oct 5 06:05:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:05:08 localhost podman[329025]: 2025-10-05 10:05:08.549509005 +0000 UTC m=+0.063857025 container died 238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0318891d-786a-4d66-8c6a-d1c92f1bd551, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b-userdata-shm.mount: Deactivated successfully. Oct 5 06:05:08 localhost systemd[1]: var-lib-containers-storage-overlay-25005fbbc2ae01c4bc228679556acb7b69c873daca20e69ba0d0d97bad9246b0-merged.mount: Deactivated successfully. 
Oct 5 06:05:08 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e119 do_prune osdmap full prune enabled Oct 5 06:05:08 localhost podman[329025]: 2025-10-05 10:05:08.597619562 +0000 UTC m=+0.111967532 container remove 238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0318891d-786a-4d66-8c6a-d1c92f1bd551, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:08 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e120 e120: 6 total, 6 up, 6 in Oct 5 06:05:08 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e120: 6 total, 6 up, 6 in Oct 5 06:05:08 localhost systemd[1]: libpod-conmon-238a0f5974acc1857fffd3a525f67d580616677bc3754befcc84a3fdd348018b.scope: Deactivated successfully. 
Oct 5 06:05:08 localhost podman[329031]: 2025-10-05 10:05:08.650119707 +0000 UTC m=+0.150149788 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 06:05:08 localhost podman[329031]: 2025-10-05 10:05:08.666380969 +0000 UTC m=+0.166411040 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 
'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, vcs-type=git, distribution-scope=public) Oct 5 06:05:08 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:05:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:08.986 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:08 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:08.987 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:05:08 localhost nova_compute[297021]: 2025-10-05 10:05:08.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:09 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:09.046 272040 INFO neutron.agent.dhcp.agent [None req-daaa979b-99b8-424c-9016-0d6dd7e67051 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:09 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:09.048 272040 INFO neutron.agent.dhcp.agent [None req-daaa979b-99b8-424c-9016-0d6dd7e67051 - - - - - -] Network not present, action: clean_devices, action_kwargs: 
{}#033[00m Oct 5 06:05:09 localhost nova_compute[297021]: 2025-10-05 10:05:09.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:09 localhost ovn_controller[157794]: 2025-10-05T10:05:09Z|00207|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:05:09 localhost nova_compute[297021]: 2025-10-05 10:05:09.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:09 localhost nova_compute[297021]: 2025-10-05 10:05:09.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:09 localhost systemd[1]: run-netns-qdhcp\x2d0318891d\x2d786a\x2d4d66\x2d8c6a\x2dd1c92f1bd551.mount: Deactivated successfully. Oct 5 06:05:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e120 do_prune osdmap full prune enabled Oct 5 06:05:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e121 e121: 6 total, 6 up, 6 in Oct 5 06:05:09 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e121: 6 total, 6 up, 6 in Oct 5 06:05:09 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:09.637 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:10 localhost systemd[1]: tmp-crun.eZ0xfw.mount: Deactivated successfully. 
Oct 5 06:05:10 localhost dnsmasq[328250]: exiting on receipt of SIGTERM Oct 5 06:05:10 localhost podman[329088]: 2025-10-05 10:05:10.197311325 +0000 UTC m=+0.073435586 container kill e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9d13ee7-8f20-4535-9510-4427059f352b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:10 localhost systemd[1]: libpod-e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9.scope: Deactivated successfully. Oct 5 06:05:10 localhost podman[329103]: 2025-10-05 10:05:10.28000908 +0000 UTC m=+0.058090509 container died e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9d13ee7-8f20-4535-9510-4427059f352b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 5 06:05:10 localhost podman[329103]: 2025-10-05 10:05:10.31900436 +0000 UTC m=+0.097085728 container remove e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9d13ee7-8f20-4535-9510-4427059f352b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:05:10 localhost systemd[1]: libpod-conmon-e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9.scope: Deactivated successfully. Oct 5 06:05:10 localhost nova_compute[297021]: 2025-10-05 10:05:10.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:10 localhost nova_compute[297021]: 2025-10-05 10:05:10.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:05:10 localhost systemd[1]: var-lib-containers-storage-overlay-460b42c23c4aa4eead059ae51ae8150ec908d43e2af89d55567fd9c124f224f1-merged.mount: Deactivated successfully. Oct 5 06:05:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e88d26e0da10fae19882563ddb0dabc2cbcfeef4cf2703c0d653aea3bb5038a9-userdata-shm.mount: Deactivated successfully. Oct 5 06:05:10 localhost systemd[1]: run-netns-qdhcp\x2df9d13ee7\x2d8f20\x2d4535\x2d9510\x2d4427059f352b.mount: Deactivated successfully. 
Oct 5 06:05:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:10.540 272040 INFO neutron.agent.dhcp.agent [None req-719b85d4-5b20-4caa-9423-336777ef94d9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:10.541 272040 INFO neutron.agent.dhcp.agent [None req-719b85d4-5b20-4caa-9423-336777ef94d9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:10 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e121 do_prune osdmap full prune enabled Oct 5 06:05:10 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e122 e122: 6 total, 6 up, 6 in Oct 5 06:05:10 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e122: 6 total, 6 up, 6 in Oct 5 06:05:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:10.789 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e122 do_prune osdmap full prune enabled Oct 5 06:05:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e123 e123: 6 total, 6 up, 6 in Oct 5 06:05:11 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e123: 6 total, 6 up, 6 in Oct 5 06:05:11 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:11.990 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:05:12 localhost 
nova_compute[297021]: 2025-10-05 10:05:12.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:05:12 localhost podman[329127]: 2025-10-05 10:05:12.672536916 +0000 UTC m=+0.080069245 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:05:12 localhost podman[329127]: 2025-10-05 10:05:12.681474748 +0000 UTC m=+0.089007087 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:05:12 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:05:13 localhost nova_compute[297021]: 2025-10-05 10:05:13.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:14 localhost nova_compute[297021]: 2025-10-05 10:05:14.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:14 localhost nova_compute[297021]: 2025-10-05 10:05:14.423 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:16 localhost nova_compute[297021]: 2025-10-05 10:05:16.424 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:16 
localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e123 do_prune osdmap full prune enabled Oct 5 06:05:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 e124: 6 total, 6 up, 6 in Oct 5 06:05:16 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e124: 6 total, 6 up, 6 in Oct 5 06:05:17 localhost nova_compute[297021]: 2025-10-05 10:05:17.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:17 localhost nova_compute[297021]: 2025-10-05 10:05:17.446 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:17 localhost nova_compute[297021]: 2025-10-05 10:05:17.446 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:17 localhost nova_compute[297021]: 2025-10-05 10:05:17.467 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:05:17 localhost nova_compute[297021]: 2025-10-05 10:05:17.468 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock 
"compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:05:17 localhost nova_compute[297021]: 2025-10-05 10:05:17.468 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:05:17 localhost nova_compute[297021]: 2025-10-05 10:05:17.468 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:05:17 localhost nova_compute[297021]: 2025-10-05 10:05:17.469 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:05:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:05:17 localhost podman[329150]: 2025-10-05 10:05:17.675757381 +0000 UTC m=+0.084458304 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:05:17 localhost podman[329150]: 2025-10-05 10:05:17.687923741 +0000 UTC m=+0.096624654 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:05:17 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:05:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:05:17 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2431877034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:05:17 localhost nova_compute[297021]: 2025-10-05 10:05:17.967 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.112 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.112 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.333 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.334 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11205MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.334 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.335 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.489 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.490 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.490 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:05:18 localhost nova_compute[297021]: 2025-10-05 10:05:18.536 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:05:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:05:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/532833000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:05:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:05:18 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/532833000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:05:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:05:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/297054183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.038 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.044 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.063 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.066 2 DEBUG nova.compute.resource_tracker [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.066 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:19 localhost nova_compute[297021]: 2025-10-05 10:05:19.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:05:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:20.467 163434 
DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:05:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:20.468 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:05:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:20.469 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.042 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.042 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.043 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.150 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring 
lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.151 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.152 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.152 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:05:21 localhost podman[248506]: time="2025-10-05T10:05:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:05:21 localhost podman[248506]: @ - - [05/Oct/2025:10:05:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:05:21 localhost podman[248506]: @ - - [05/Oct/2025:10:05:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19380 "" "Go-http-client/1.1" Oct 5 06:05:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:21 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:21.764 272040 INFO 
neutron.agent.linux.ip_lib [None req-0296beaa-2606-4420-823c-91c164e7ca25 - - - - - -] Device tap6dbb25f5-1a cannot be used as it has no MAC address#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:21 localhost kernel: device tap6dbb25f5-1a entered promiscuous mode Oct 5 06:05:21 localhost ovn_controller[157794]: 2025-10-05T10:05:21Z|00208|binding|INFO|Claiming lport 6dbb25f5-1aed-4015-869f-071fe3aa6bb2 for this chassis. Oct 5 06:05:21 localhost NetworkManager[5981]: [1759658721.8297] manager: (tap6dbb25f5-1a): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:21 localhost ovn_controller[157794]: 2025-10-05T10:05:21Z|00209|binding|INFO|6dbb25f5-1aed-4015-869f-071fe3aa6bb2: Claiming unknown Oct 5 06:05:21 localhost systemd-udevd[329224]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:05:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:21.840 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-b1923011-df52-4ef0-b36e-17230cd3114a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1923011-df52-4ef0-b36e-17230cd3114a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27a7569c-6412-44f6-bc67-c992a920641c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6dbb25f5-1aed-4015-869f-071fe3aa6bb2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:21.843 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 6dbb25f5-1aed-4015-869f-071fe3aa6bb2 in datapath b1923011-df52-4ef0-b36e-17230cd3114a bound to our chassis#033[00m Oct 5 06:05:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:21.846 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b1923011-df52-4ef0-b36e-17230cd3114a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:21.847 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[d313ad20-4436-476a-aac6-2a5e6d0d740d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:21 localhost journal[237931]: ethtool ioctl error on tap6dbb25f5-1a: No such device Oct 5 06:05:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:05:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:05:21 localhost journal[237931]: ethtool ioctl error on tap6dbb25f5-1a: No such device Oct 5 06:05:21 localhost ovn_controller[157794]: 2025-10-05T10:05:21Z|00210|binding|INFO|Setting lport 6dbb25f5-1aed-4015-869f-071fe3aa6bb2 ovn-installed in OVS Oct 5 06:05:21 localhost ovn_controller[157794]: 2025-10-05T10:05:21Z|00211|binding|INFO|Setting lport 6dbb25f5-1aed-4015-869f-071fe3aa6bb2 up in Southbound Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:21 localhost journal[237931]: ethtool ioctl error on tap6dbb25f5-1a: No such device Oct 5 06:05:21 localhost journal[237931]: ethtool ioctl error on tap6dbb25f5-1a: No such device Oct 5 06:05:21 localhost journal[237931]: ethtool ioctl error on tap6dbb25f5-1a: No such device Oct 5 06:05:21 localhost journal[237931]: ethtool ioctl error on tap6dbb25f5-1a: No such device Oct 5 06:05:21 localhost journal[237931]: ethtool ioctl error on tap6dbb25f5-1a: No such device Oct 5 06:05:21 localhost journal[237931]: ethtool ioctl error on tap6dbb25f5-1a: No such device Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.923 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.934 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.958 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock 
"refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:05:21 localhost nova_compute[297021]: 2025-10-05 10:05:21.958 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:05:21 localhost systemd[1]: tmp-crun.Z8tUIh.mount: Deactivated successfully. Oct 5 06:05:21 localhost podman[329230]: 2025-10-05 10:05:21.982486992 +0000 UTC m=+0.101761375 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:05:22 localhost openstack_network_exporter[250601]: ERROR 10:05:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:05:22 localhost openstack_network_exporter[250601]: ERROR 10:05:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:05:22 localhost openstack_network_exporter[250601]: ERROR 10:05:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:05:22 localhost openstack_network_exporter[250601]: ERROR 10:05:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:05:22 localhost openstack_network_exporter[250601]: Oct 5 06:05:22 localhost openstack_network_exporter[250601]: ERROR 10:05:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:05:22 localhost openstack_network_exporter[250601]: Oct 5 06:05:22 localhost podman[329232]: 2025-10-05 10:05:22.037568928 +0000 UTC m=+0.151938947 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:05:22 localhost podman[329230]: 2025-10-05 10:05:22.046752957 +0000 UTC m=+0.166027330 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 06:05:22 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:05:22 localhost podman[329232]: 2025-10-05 10:05:22.079066315 +0000 UTC m=+0.193436364 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001) Oct 5 06:05:22 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:05:22 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:22.670 2 INFO neutron.agent.securitygroups_rpc [None req-93fe7840-dc69-45ff-b50f-36a4dfb68a10 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:22 localhost podman[329331]: Oct 5 06:05:22 localhost podman[329331]: 2025-10-05 10:05:22.738568215 +0000 UTC m=+0.091668540 container create d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 5 06:05:22 localhost systemd[1]: Started libpod-conmon-d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a.scope. Oct 5 06:05:22 localhost podman[329331]: 2025-10-05 10:05:22.693991754 +0000 UTC m=+0.047092119 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:22 localhost systemd[1]: Started libcrun container. 
Oct 5 06:05:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c92a3baafd69abf35d073d93a6068ffad8b6d773e11b55dc2025f2c2c04540d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:22 localhost podman[329331]: 2025-10-05 10:05:22.82378511 +0000 UTC m=+0.176885445 container init d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:22 localhost podman[329331]: 2025-10-05 10:05:22.832559758 +0000 UTC m=+0.185660083 container start d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:22 localhost dnsmasq[329349]: started, version 2.85 cachesize 150 Oct 5 06:05:22 localhost dnsmasq[329349]: DNS service limited to local subnets Oct 5 06:05:22 localhost dnsmasq[329349]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:22 localhost dnsmasq[329349]: warning: no upstream servers configured Oct 
5 06:05:22 localhost dnsmasq-dhcp[329349]: DHCPv6, static leases only on 2001:db8:3::, lease time 1d Oct 5 06:05:22 localhost dnsmasq[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/addn_hosts - 0 addresses Oct 5 06:05:22 localhost dnsmasq-dhcp[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/host Oct 5 06:05:22 localhost dnsmasq-dhcp[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/opts Oct 5 06:05:22 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:22.891 272040 INFO neutron.agent.dhcp.agent [None req-0296beaa-2606-4420-823c-91c164e7ca25 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:21Z, description=, device_id=94485d07-9727-4c93-a258-4dd243f7b5fb, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b18212bc-f2ca-4131-b64f-68fc6240b1dc, ip_allocation=immediate, mac_address=fa:16:3e:f5:46:a8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:20Z, description=, dns_domain=, id=b1923011-df52-4ef0-b36e-17230cd3114a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2140761538, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=56944, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1327, status=ACTIVE, subnets=['1fec3e93-dfc9-4a53-99e3-b0957694fa02'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:05:20Z, vlan_transparent=None, network_id=b1923011-df52-4ef0-b36e-17230cd3114a, port_security_enabled=False, 
project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1336, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:05:21Z on network b1923011-df52-4ef0-b36e-17230cd3114a#033[00m Oct 5 06:05:23 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:23.049 272040 INFO neutron.agent.dhcp.agent [None req-ea4c082f-7f9a-495a-9c51-700a673c05dc - - - - - -] DHCP configuration for ports {'df63be22-db0d-459f-b8eb-f3f062833fab'} is completed#033[00m Oct 5 06:05:23 localhost dnsmasq[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/addn_hosts - 1 addresses Oct 5 06:05:23 localhost dnsmasq-dhcp[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/host Oct 5 06:05:23 localhost dnsmasq-dhcp[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/opts Oct 5 06:05:23 localhost podman[329368]: 2025-10-05 10:05:23.086134754 +0000 UTC m=+0.063772842 container kill d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:05:23 localhost nova_compute[297021]: 2025-10-05 10:05:23.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:23 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:23.226 272040 INFO neutron.agent.dhcp.agent [None req-0296beaa-2606-4420-823c-91c164e7ca25 - - - - - -] 
Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:21Z, description=, device_id=94485d07-9727-4c93-a258-4dd243f7b5fb, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b18212bc-f2ca-4131-b64f-68fc6240b1dc, ip_allocation=immediate, mac_address=fa:16:3e:f5:46:a8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:20Z, description=, dns_domain=, id=b1923011-df52-4ef0-b36e-17230cd3114a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2140761538, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=56944, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1327, status=ACTIVE, subnets=['1fec3e93-dfc9-4a53-99e3-b0957694fa02'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:05:20Z, vlan_transparent=None, network_id=b1923011-df52-4ef0-b36e-17230cd3114a, port_security_enabled=False, project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1336, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:05:21Z on network b1923011-df52-4ef0-b36e-17230cd3114a#033[00m Oct 5 06:05:23 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:23.310 272040 INFO neutron.agent.dhcp.agent [None req-ac79236d-e78b-420c-80d5-9b896e10751c - - - - - -] DHCP configuration for ports {'b18212bc-f2ca-4131-b64f-68fc6240b1dc'} is completed#033[00m Oct 5 06:05:23 localhost neutron_sriov_agent[264984]: 2025-10-05 
10:05:23.345 2 INFO neutron.agent.securitygroups_rpc [None req-1d706833-c05a-42ba-83e6-877058a56cc7 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:23 localhost dnsmasq[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/addn_hosts - 1 addresses Oct 5 06:05:23 localhost dnsmasq-dhcp[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/host Oct 5 06:05:23 localhost dnsmasq-dhcp[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/opts Oct 5 06:05:23 localhost podman[329406]: 2025-10-05 10:05:23.408985663 +0000 UTC m=+0.059053275 container kill d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:05:23 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:23.656 272040 INFO neutron.agent.dhcp.agent [None req-502f43b6-6ebc-4815-accf-c26141ad2111 - - - - - -] DHCP configuration for ports {'b18212bc-f2ca-4131-b64f-68fc6240b1dc'} is completed#033[00m Oct 5 06:05:24 localhost nova_compute[297021]: 2025-10-05 10:05:24.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:05:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : 
from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:05:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:26 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:05:26 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:05:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:05:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:05:26 localhost dnsmasq[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/addn_hosts - 0 addresses Oct 5 06:05:26 localhost dnsmasq-dhcp[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/host Oct 5 06:05:26 localhost dnsmasq-dhcp[329349]: read /var/lib/neutron/dhcp/b1923011-df52-4ef0-b36e-17230cd3114a/opts Oct 5 06:05:26 localhost podman[329527]: 2025-10-05 10:05:26.859553101 +0000 UTC m=+0.094366233 container kill d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:05:27 localhost ovn_controller[157794]: 2025-10-05T10:05:27Z|00212|binding|INFO|Releasing lport 6dbb25f5-1aed-4015-869f-071fe3aa6bb2 from this chassis (sb_readonly=0) 
Oct 5 06:05:27 localhost kernel: device tap6dbb25f5-1a left promiscuous mode Oct 5 06:05:27 localhost ovn_controller[157794]: 2025-10-05T10:05:27Z|00213|binding|INFO|Setting lport 6dbb25f5-1aed-4015-869f-071fe3aa6bb2 down in Southbound Oct 5 06:05:27 localhost nova_compute[297021]: 2025-10-05 10:05:27.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:27.068 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-b1923011-df52-4ef0-b36e-17230cd3114a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1923011-df52-4ef0-b36e-17230cd3114a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27a7569c-6412-44f6-bc67-c992a920641c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6dbb25f5-1aed-4015-869f-071fe3aa6bb2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:27.070 163434 INFO 
neutron.agent.ovn.metadata.agent [-] Port 6dbb25f5-1aed-4015-869f-071fe3aa6bb2 in datapath b1923011-df52-4ef0-b36e-17230cd3114a unbound from our chassis#033[00m Oct 5 06:05:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:27.072 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b1923011-df52-4ef0-b36e-17230cd3114a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:27.073 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[23b6a0ca-713f-4ace-8753-4309e77696f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:27 localhost nova_compute[297021]: 2025-10-05 10:05:27.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:27 localhost systemd[1]: tmp-crun.Lqz7Ho.mount: Deactivated successfully. Oct 5 06:05:27 localhost podman[329568]: 2025-10-05 10:05:27.612662784 +0000 UTC m=+0.069853848 container kill d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:05:27 localhost dnsmasq[329349]: exiting on receipt of SIGTERM Oct 5 06:05:27 localhost systemd[1]: libpod-d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a.scope: Deactivated successfully. 
Oct 5 06:05:27 localhost podman[329580]: 2025-10-05 10:05:27.681671349 +0000 UTC m=+0.054288325 container died d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:27 localhost podman[329580]: 2025-10-05 10:05:27.712104755 +0000 UTC m=+0.084721681 container cleanup d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:05:27 localhost systemd[1]: libpod-conmon-d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a.scope: Deactivated successfully. 
Oct 5 06:05:27 localhost podman[329582]: 2025-10-05 10:05:27.761233489 +0000 UTC m=+0.125010956 container remove d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b1923011-df52-4ef0-b36e-17230cd3114a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:27 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:05:27 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:27.792 272040 INFO neutron.agent.dhcp.agent [None req-c6a64351-fc18-4ea9-bfd8-cfafc26b3861 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:27 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:27.794 272040 INFO neutron.agent.dhcp.agent [None req-c6a64351-fc18-4ea9-bfd8-cfafc26b3861 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:27 localhost systemd[1]: var-lib-containers-storage-overlay-6c92a3baafd69abf35d073d93a6068ffad8b6d773e11b55dc2025f2c2c04540d-merged.mount: Deactivated successfully. Oct 5 06:05:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9b3397c6b38a4a48120c7c450c451031e4ec6b852e881a3139cfde39bfef54a-userdata-shm.mount: Deactivated successfully. Oct 5 06:05:27 localhost systemd[1]: run-netns-qdhcp\x2db1923011\x2ddf52\x2d4ef0\x2db36e\x2d17230cd3114a.mount: Deactivated successfully. 
Oct 5 06:05:27 localhost ovn_controller[157794]: 2025-10-05T10:05:27Z|00214|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:05:27 localhost ovn_controller[157794]: 2025-10-05T10:05:27Z|00215|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:05:27 localhost nova_compute[297021]: 2025-10-05 10:05:27.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:28 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:28.664 2 INFO neutron.agent.securitygroups_rpc [None req-d703c398-6b6b-4943-948d-cb3bb0e9aa0d 1923ea4457da447faeaeab6caeaa2432 0e9cb8e52fb8423a938253c02c2bf4e9 - - default default] Security group member updated ['0ca2311d-6d3d-404c-89d9-0a8f70a83790']#033[00m Oct 5 06:05:29 localhost nova_compute[297021]: 2025-10-05 10:05:29.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:29 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:29.426 2 INFO neutron.agent.securitygroups_rpc [None req-8eb6dbc3-4698-4d96-8d9f-a1a17a2550fa 1923ea4457da447faeaeab6caeaa2432 0e9cb8e52fb8423a938253c02c2bf4e9 - - default default] Security group member updated ['0ca2311d-6d3d-404c-89d9-0a8f70a83790']#033[00m Oct 5 06:05:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:05:29 localhost systemd[1]: tmp-crun.ibUaLb.mount: Deactivated successfully. 
Oct 5 06:05:29 localhost podman[329609]: 2025-10-05 10:05:29.686561557 +0000 UTC m=+0.089687937 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:05:29 localhost podman[329609]: 2025-10-05 10:05:29.715651637 +0000 UTC 
m=+0.118778027 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:05:29 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:05:31 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:31.178 272040 INFO neutron.agent.linux.ip_lib [None req-3c96cdfc-e5d3-4860-bf1d-bc6b74b9a00d - - - - - -] Device tapbe295565-7f cannot be used as it has no MAC address#033[00m Oct 5 06:05:31 localhost nova_compute[297021]: 2025-10-05 10:05:31.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:31 localhost kernel: device tapbe295565-7f entered promiscuous mode Oct 5 06:05:31 localhost NetworkManager[5981]: [1759658731.2103] manager: (tapbe295565-7f): new Generic device (/org/freedesktop/NetworkManager/Devices/37) Oct 5 06:05:31 localhost ovn_controller[157794]: 2025-10-05T10:05:31Z|00216|binding|INFO|Claiming lport be295565-7f0b-4289-b8ff-8daac7576583 for this chassis. Oct 5 06:05:31 localhost ovn_controller[157794]: 2025-10-05T10:05:31Z|00217|binding|INFO|be295565-7f0b-4289-b8ff-8daac7576583: Claiming unknown Oct 5 06:05:31 localhost nova_compute[297021]: 2025-10-05 10:05:31.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:31 localhost systemd-udevd[329637]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:05:31 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:31.228 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-9dcbc6a6-80bf-4640-8053-0334429b2ccd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9dcbc6a6-80bf-4640-8053-0334429b2ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da9e9ed8-eab0-43fd-89d8-5e55dfc417d1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=be295565-7f0b-4289-b8ff-8daac7576583) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:31 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:31.230 163434 INFO neutron.agent.ovn.metadata.agent [-] Port be295565-7f0b-4289-b8ff-8daac7576583 in datapath 9dcbc6a6-80bf-4640-8053-0334429b2ccd bound to our chassis#033[00m Oct 5 06:05:31 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:31.231 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9dcbc6a6-80bf-4640-8053-0334429b2ccd or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:31 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:31.232 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[f15e0ddb-d6fd-4135-abd4-bf1a7c73bb69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:31 localhost journal[237931]: ethtool ioctl error on tapbe295565-7f: No such device Oct 5 06:05:31 localhost ovn_controller[157794]: 2025-10-05T10:05:31Z|00218|binding|INFO|Setting lport be295565-7f0b-4289-b8ff-8daac7576583 ovn-installed in OVS Oct 5 06:05:31 localhost ovn_controller[157794]: 2025-10-05T10:05:31Z|00219|binding|INFO|Setting lport be295565-7f0b-4289-b8ff-8daac7576583 up in Southbound Oct 5 06:05:31 localhost journal[237931]: ethtool ioctl error on tapbe295565-7f: No such device Oct 5 06:05:31 localhost nova_compute[297021]: 2025-10-05 10:05:31.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:31 localhost journal[237931]: ethtool ioctl error on tapbe295565-7f: No such device Oct 5 06:05:31 localhost journal[237931]: ethtool ioctl error on tapbe295565-7f: No such device Oct 5 06:05:31 localhost journal[237931]: ethtool ioctl error on tapbe295565-7f: No such device Oct 5 06:05:31 localhost journal[237931]: ethtool ioctl error on tapbe295565-7f: No such device Oct 5 06:05:31 localhost journal[237931]: ethtool ioctl error on tapbe295565-7f: No such device Oct 5 06:05:31 localhost journal[237931]: ethtool ioctl error on tapbe295565-7f: No such device Oct 5 06:05:31 localhost nova_compute[297021]: 2025-10-05 10:05:31.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:31 localhost nova_compute[297021]: 2025-10-05 10:05:31.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:32 localhost podman[329708]: Oct 5 06:05:32 localhost podman[329708]: 2025-10-05 10:05:32.097831091 +0000 UTC m=+0.092035421 container create 35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9dcbc6a6-80bf-4640-8053-0334429b2ccd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:05:32 localhost systemd[1]: Started libpod-conmon-35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb.scope. Oct 5 06:05:32 localhost podman[329708]: 2025-10-05 10:05:32.054042822 +0000 UTC m=+0.048247182 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:32 localhost systemd[1]: Started libcrun container. 
Oct 5 06:05:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a6aba4275ee2dcc8b5da9bce950d4bb5e7e148a7ef0445bc669b506fe23600/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:32 localhost podman[329708]: 2025-10-05 10:05:32.170493544 +0000 UTC m=+0.164697874 container init 35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9dcbc6a6-80bf-4640-8053-0334429b2ccd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:32 localhost podman[329708]: 2025-10-05 10:05:32.179668924 +0000 UTC m=+0.173873264 container start 35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9dcbc6a6-80bf-4640-8053-0334429b2ccd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:05:32 localhost dnsmasq[329726]: started, version 2.85 cachesize 150 Oct 5 06:05:32 localhost dnsmasq[329726]: DNS service limited to local subnets Oct 5 06:05:32 localhost dnsmasq[329726]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:32 localhost dnsmasq[329726]: warning: no upstream servers configured Oct 
5 06:05:32 localhost dnsmasq-dhcp[329726]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:05:32 localhost dnsmasq[329726]: read /var/lib/neutron/dhcp/9dcbc6a6-80bf-4640-8053-0334429b2ccd/addn_hosts - 0 addresses Oct 5 06:05:32 localhost dnsmasq-dhcp[329726]: read /var/lib/neutron/dhcp/9dcbc6a6-80bf-4640-8053-0334429b2ccd/host Oct 5 06:05:32 localhost dnsmasq-dhcp[329726]: read /var/lib/neutron/dhcp/9dcbc6a6-80bf-4640-8053-0334429b2ccd/opts Oct 5 06:05:32 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:32.233 2 INFO neutron.agent.securitygroups_rpc [None req-340b3254-226d-430f-95be-e66d88dfe216 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:32 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:32.319 272040 INFO neutron.agent.dhcp.agent [None req-349254e1-6718-4894-b850-5399b16d57ee - - - - - -] DHCP configuration for ports {'851b6c7d-99d7-4782-b750-96fd67aed1ea'} is completed#033[00m Oct 5 06:05:32 localhost ovn_controller[157794]: 2025-10-05T10:05:32Z|00220|binding|INFO|Removing iface tapbe295565-7f ovn-installed in OVS Oct 5 06:05:32 localhost ovn_controller[157794]: 2025-10-05T10:05:32Z|00221|binding|INFO|Removing lport be295565-7f0b-4289-b8ff-8daac7576583 ovn-installed in OVS Oct 5 06:05:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:32.467 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 0002b018-cb05-45d4-a827-21c1dec26aff with type ""#033[00m Oct 5 06:05:32 localhost nova_compute[297021]: 2025-10-05 10:05:32.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:32.470 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', 
conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-9dcbc6a6-80bf-4640-8053-0334429b2ccd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9dcbc6a6-80bf-4640-8053-0334429b2ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da9e9ed8-eab0-43fd-89d8-5e55dfc417d1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=be295565-7f0b-4289-b8ff-8daac7576583) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:32.472 163434 INFO neutron.agent.ovn.metadata.agent [-] Port be295565-7f0b-4289-b8ff-8daac7576583 in datapath 9dcbc6a6-80bf-4640-8053-0334429b2ccd unbound from our chassis#033[00m Oct 5 06:05:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:32.473 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9dcbc6a6-80bf-4640-8053-0334429b2ccd or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:32.475 163567 DEBUG oslo.privsep.daemon [-] privsep: 
reply[b621e96f-b4de-4703-baf7-34c27664e072]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:32 localhost nova_compute[297021]: 2025-10-05 10:05:32.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:32 localhost dnsmasq[329726]: exiting on receipt of SIGTERM Oct 5 06:05:32 localhost podman[329744]: 2025-10-05 10:05:32.505286777 +0000 UTC m=+0.068310827 container kill 35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9dcbc6a6-80bf-4640-8053-0334429b2ccd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:05:32 localhost systemd[1]: libpod-35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb.scope: Deactivated successfully. Oct 5 06:05:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:05:32 localhost podman[329761]: 2025-10-05 10:05:32.590178962 +0000 UTC m=+0.062574200 container died 35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9dcbc6a6-80bf-4640-8053-0334429b2ccd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:32 localhost podman[329761]: 2025-10-05 10:05:32.63097491 +0000 UTC m=+0.103370148 container remove 35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9dcbc6a6-80bf-4640-8053-0334429b2ccd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:05:32 localhost ovn_controller[157794]: 2025-10-05T10:05:32Z|00222|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:05:32 localhost systemd[1]: libpod-conmon-35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb.scope: Deactivated successfully. 
Oct 5 06:05:32 localhost nova_compute[297021]: 2025-10-05 10:05:32.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:32 localhost kernel: device tapbe295565-7f left promiscuous mode Oct 5 06:05:32 localhost nova_compute[297021]: 2025-10-05 10:05:32.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:32 localhost nova_compute[297021]: 2025-10-05 10:05:32.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:32 localhost podman[329772]: 2025-10-05 10:05:32.678435059 +0000 UTC m=+0.136329303 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 06:05:32 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:32.700 272040 INFO neutron.agent.dhcp.agent [None req-1fc610b9-0147-4531-b2bc-2032ae99db40 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:32 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:32.701 272040 INFO neutron.agent.dhcp.agent [None req-1fc610b9-0147-4531-b2bc-2032ae99db40 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:32 localhost podman[329772]: 2025-10-05 10:05:32.743978439 +0000 UTC m=+0.201872633 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:32 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:05:33 localhost systemd[1]: var-lib-containers-storage-overlay-35a6aba4275ee2dcc8b5da9bce950d4bb5e7e148a7ef0445bc669b506fe23600-merged.mount: Deactivated successfully. Oct 5 06:05:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35659356d5562736e945d8e6d96fa53a0c19fcef1a9b0678374dcc83dcc199eb-userdata-shm.mount: Deactivated successfully. Oct 5 06:05:33 localhost systemd[1]: run-netns-qdhcp\x2d9dcbc6a6\x2d80bf\x2d4640\x2d8053\x2d0334429b2ccd.mount: Deactivated successfully. Oct 5 06:05:34 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Oct 5 06:05:34 localhost nova_compute[297021]: 2025-10-05 10:05:34.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:05:35 localhost podman[329811]: 2025-10-05 10:05:35.672796299 +0000 UTC m=+0.083914369 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 06:05:35 localhost podman[329811]: 2025-10-05 10:05:35.686971205 +0000 UTC m=+0.098089265 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:35 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:05:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.594212) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736594284, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1261, "num_deletes": 258, "total_data_size": 1196592, "memory_usage": 1222456, "flush_reason": "Manual Compaction"} Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736601575, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 883208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27624, "largest_seqno": 28884, "table_properties": {"data_size": 878500, "index_size": 2179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12351, "raw_average_key_size": 21, "raw_value_size": 868278, "raw_average_value_size": 1504, "num_data_blocks": 95, "num_entries": 577, "num_filter_entries": 577, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, 
"fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658659, "oldest_key_time": 1759658659, "file_creation_time": 1759658736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 7406 microseconds, and 4048 cpu microseconds. Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.601620) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 883208 bytes OK Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.601646) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.605758) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.605782) EVENT_LOG_v1 {"time_micros": 1759658736605774, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.605804) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1190819, prev total WAL file size 1191143, number of live WAL files 2. Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.606602) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303035' seq:72057594037927935, type:22 .. 
'6D6772737461740034323537' seq:0, type:0; will stop at (end) Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(862KB)], [51(15MB)] Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736606694, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16954239, "oldest_snapshot_seqno": -1} Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 12427 keys, 15091664 bytes, temperature: kUnknown Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736702517, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 15091664, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15024280, "index_size": 35219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 336017, "raw_average_key_size": 27, "raw_value_size": 14816031, "raw_average_value_size": 1192, "num_data_blocks": 1306, "num_entries": 12427, "num_filter_entries": 12427, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.703006) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 15091664 bytes Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.704876) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.6 rd, 157.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.3 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(36.3) write-amplify(17.1) OK, records in: 12925, records dropped: 498 output_compression: NoCompression Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.704907) EVENT_LOG_v1 {"time_micros": 1759658736704893, "job": 30, "event": "compaction_finished", "compaction_time_micros": 95980, "compaction_time_cpu_micros": 48708, "output_level": 6, "num_output_files": 1, "total_output_size": 15091664, "num_input_records": 12925, "num_output_records": 12427, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005471150/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736705247, "job": 30, "event": "table_file_deletion", "file_number": 53} Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736708504, "job": 30, "event": "table_file_deletion", "file_number": 51} Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.606484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:37.026 272040 INFO neutron.agent.linux.ip_lib [None req-07cf5eea-1672-4b52-a8fc-782fd7c05e9f - - - - - -] Device tapc375056b-9b cannot be used 
as it has no MAC address#033[00m Oct 5 06:05:37 localhost nova_compute[297021]: 2025-10-05 10:05:37.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:37 localhost kernel: device tapc375056b-9b entered promiscuous mode Oct 5 06:05:37 localhost NetworkManager[5981]: [1759658737.0618] manager: (tapc375056b-9b): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Oct 5 06:05:37 localhost ovn_controller[157794]: 2025-10-05T10:05:37Z|00223|binding|INFO|Claiming lport c375056b-9b70-4249-b6d4-bf74f7c8ec9a for this chassis. Oct 5 06:05:37 localhost nova_compute[297021]: 2025-10-05 10:05:37.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:37 localhost ovn_controller[157794]: 2025-10-05T10:05:37Z|00224|binding|INFO|c375056b-9b70-4249-b6d4-bf74f7c8ec9a: Claiming unknown Oct 5 06:05:37 localhost systemd-udevd[329840]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:05:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:37.073 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2d:ec78/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-2021757a-11b5-4760-9e2a-c264808c4d2b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2021757a-11b5-4760-9e2a-c264808c4d2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b19cb2ed6df34a0dad27155d804f6680', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df05faa3-5a85-42c3-ba81-417ab131cb76, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c375056b-9b70-4249-b6d4-bf74f7c8ec9a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:37.076 163434 INFO neutron.agent.ovn.metadata.agent [-] Port c375056b-9b70-4249-b6d4-bf74f7c8ec9a in datapath 2021757a-11b5-4760-9e2a-c264808c4d2b bound to our chassis#033[00m Oct 5 06:05:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:37.082 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6830dbd5-c87d-41be-a59d-cbcb6c90909e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:05:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:37.082 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2021757a-11b5-4760-9e2a-c264808c4d2b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:05:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:37.083 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0ca9a8-2c43-44ba-b35d-33dba723af6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:37 localhost journal[237931]: ethtool ioctl error on tapc375056b-9b: No such device Oct 5 06:05:37 localhost journal[237931]: ethtool ioctl error on tapc375056b-9b: No such device Oct 5 06:05:37 localhost ovn_controller[157794]: 2025-10-05T10:05:37Z|00225|binding|INFO|Setting lport c375056b-9b70-4249-b6d4-bf74f7c8ec9a ovn-installed in OVS Oct 5 06:05:37 localhost ovn_controller[157794]: 2025-10-05T10:05:37Z|00226|binding|INFO|Setting lport c375056b-9b70-4249-b6d4-bf74f7c8ec9a up in Southbound Oct 5 06:05:37 localhost nova_compute[297021]: 2025-10-05 10:05:37.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:37 localhost journal[237931]: ethtool ioctl error on tapc375056b-9b: No such device Oct 5 06:05:37 localhost journal[237931]: ethtool ioctl error on tapc375056b-9b: No such device Oct 5 06:05:37 localhost journal[237931]: ethtool ioctl error on tapc375056b-9b: No such device Oct 5 06:05:37 localhost journal[237931]: ethtool ioctl error on tapc375056b-9b: No such device Oct 5 06:05:37 localhost journal[237931]: ethtool ioctl error on tapc375056b-9b: No such device Oct 5 06:05:37 localhost journal[237931]: ethtool ioctl error on tapc375056b-9b: No such device Oct 5 06:05:37 
localhost nova_compute[297021]: 2025-10-05 10:05:37.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:37 localhost nova_compute[297021]: 2025-10-05 10:05:37.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:37 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:37.216 2 INFO neutron.agent.securitygroups_rpc [None req-8727ea00-c2b3-4681-bd7e-288be6db83c4 dd7c8ef99d0f41198e47651e3f745b5f b19cb2ed6df34a0dad27155d804f6680 - - default default] Security group member updated ['587ef845-3f12-4f64-8d07-19635386ce1f']#033[00m Oct 5 06:05:37 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:37.765 2 INFO neutron.agent.securitygroups_rpc [None req-66916715-b26f-4f18-b705-715c7650e30e fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:37.806 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:37 localhost podman[329911]: Oct 5 06:05:37 localhost podman[329911]: 2025-10-05 10:05:37.966673375 +0000 UTC m=+0.085874632 container create f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2021757a-11b5-4760-9e2a-c264808c4d2b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:05:38 localhost systemd[1]: Started 
libpod-conmon-f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4.scope. Oct 5 06:05:38 localhost systemd[1]: tmp-crun.I862Sh.mount: Deactivated successfully. Oct 5 06:05:38 localhost podman[329911]: 2025-10-05 10:05:37.92634221 +0000 UTC m=+0.045543457 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:38 localhost systemd[1]: Started libcrun container. Oct 5 06:05:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6b52d395b273416bd802024c738faa0f3c9aef29d124ca4628701eb617a6f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:38 localhost podman[329911]: 2025-10-05 10:05:38.059437334 +0000 UTC m=+0.178638591 container init f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2021757a-11b5-4760-9e2a-c264808c4d2b, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:05:38 localhost podman[329911]: 2025-10-05 10:05:38.073556968 +0000 UTC m=+0.192758235 container start f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2021757a-11b5-4760-9e2a-c264808c4d2b, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:05:38 
localhost dnsmasq[329929]: started, version 2.85 cachesize 150 Oct 5 06:05:38 localhost dnsmasq[329929]: DNS service limited to local subnets Oct 5 06:05:38 localhost dnsmasq[329929]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:38 localhost dnsmasq[329929]: warning: no upstream servers configured Oct 5 06:05:38 localhost dnsmasq[329929]: read /var/lib/neutron/dhcp/2021757a-11b5-4760-9e2a-c264808c4d2b/addn_hosts - 0 addresses Oct 5 06:05:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:38.113 272040 INFO neutron.agent.dhcp.agent [None req-07cf5eea-1672-4b52-a8fc-782fd7c05e9f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:36Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bb295776-65aa-4435-bf79-e0a89cbf9508, ip_allocation=immediate, mac_address=fa:16:3e:06:ca:9d, name=tempest-NetworksIpV6TestAttrs-51133540, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:34Z, description=, dns_domain=, id=2021757a-11b5-4760-9e2a-c264808c4d2b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksIpV6TestAttrs-test-network-159240668, port_security_enabled=True, project_id=b19cb2ed6df34a0dad27155d804f6680, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27014, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1422, status=ACTIVE, subnets=['8c8a2975-0fcb-4830-8af0-29efea5b8e90'], tags=[], tenant_id=b19cb2ed6df34a0dad27155d804f6680, updated_at=2025-10-05T10:05:35Z, vlan_transparent=None, 
network_id=2021757a-11b5-4760-9e2a-c264808c4d2b, port_security_enabled=True, project_id=b19cb2ed6df34a0dad27155d804f6680, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['587ef845-3f12-4f64-8d07-19635386ce1f'], standard_attr_id=1441, status=DOWN, tags=[], tenant_id=b19cb2ed6df34a0dad27155d804f6680, updated_at=2025-10-05T10:05:36Z on network 2021757a-11b5-4760-9e2a-c264808c4d2b#033[00m Oct 5 06:05:38 localhost dnsmasq[329929]: read /var/lib/neutron/dhcp/2021757a-11b5-4760-9e2a-c264808c4d2b/addn_hosts - 1 addresses Oct 5 06:05:38 localhost podman[329947]: 2025-10-05 10:05:38.279356557 +0000 UTC m=+0.059562128 container kill f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2021757a-11b5-4760-9e2a-c264808c4d2b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:05:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:38.279 272040 INFO neutron.agent.dhcp.agent [None req-e4b558f3-915f-4c57-94f3-25b91db3915a - - - - - -] DHCP configuration for ports {'b3898182-e6d4-408d-b204-f92414207fd0'} is completed#033[00m Oct 5 06:05:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:38.529 272040 INFO neutron.agent.dhcp.agent [None req-59acc33e-b208-4ef4-8ddc-b6b4a7921597 - - - - - -] DHCP configuration for ports {'bb295776-65aa-4435-bf79-e0a89cbf9508'} is completed#033[00m Oct 5 06:05:38 localhost dnsmasq[329929]: exiting on receipt of SIGTERM Oct 5 06:05:38 localhost podman[329984]: 2025-10-05 10:05:38.66831509 +0000 UTC m=+0.057957015 container kill 
f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2021757a-11b5-4760-9e2a-c264808c4d2b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:38 localhost systemd[1]: libpod-f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4.scope: Deactivated successfully. Oct 5 06:05:38 localhost podman[329997]: 2025-10-05 10:05:38.734911129 +0000 UTC m=+0.056164146 container died f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2021757a-11b5-4760-9e2a-c264808c4d2b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:05:38 localhost podman[329997]: 2025-10-05 10:05:38.769356574 +0000 UTC m=+0.090609541 container cleanup f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2021757a-11b5-4760-9e2a-c264808c4d2b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes 
Operator team) Oct 5 06:05:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:05:38 localhost systemd[1]: libpod-conmon-f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4.scope: Deactivated successfully. Oct 5 06:05:38 localhost podman[330004]: 2025-10-05 10:05:38.831758729 +0000 UTC m=+0.137563887 container remove f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2021757a-11b5-4760-9e2a-c264808c4d2b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:38 localhost nova_compute[297021]: 2025-10-05 10:05:38.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:38 localhost ovn_controller[157794]: 2025-10-05T10:05:38Z|00227|binding|INFO|Releasing lport c375056b-9b70-4249-b6d4-bf74f7c8ec9a from this chassis (sb_readonly=0) Oct 5 06:05:38 localhost ovn_controller[157794]: 2025-10-05T10:05:38Z|00228|binding|INFO|Setting lport c375056b-9b70-4249-b6d4-bf74f7c8ec9a down in Southbound Oct 5 06:05:38 localhost kernel: device tapc375056b-9b left promiscuous mode Oct 5 06:05:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:38.855 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 
'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2d:ec78/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-2021757a-11b5-4760-9e2a-c264808c4d2b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2021757a-11b5-4760-9e2a-c264808c4d2b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b19cb2ed6df34a0dad27155d804f6680', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df05faa3-5a85-42c3-ba81-417ab131cb76, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c375056b-9b70-4249-b6d4-bf74f7c8ec9a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:38.857 163434 INFO neutron.agent.ovn.metadata.agent [-] Port c375056b-9b70-4249-b6d4-bf74f7c8ec9a in datapath 2021757a-11b5-4760-9e2a-c264808c4d2b unbound from our chassis#033[00m Oct 5 06:05:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:38.860 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2021757a-11b5-4760-9e2a-c264808c4d2b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:05:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:38.861 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e7f339-fa80-49b2-98bd-1f1d606951b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:38 localhost nova_compute[297021]: 2025-10-05 10:05:38.865 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:38 localhost podman[330024]: 2025-10-05 10:05:38.873201534 +0000 UTC m=+0.092720098 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc.) Oct 5 06:05:38 localhost podman[330024]: 2025-10-05 10:05:38.885830447 +0000 UTC m=+0.105349051 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck 
openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vcs-type=git) Oct 5 06:05:38 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:05:38 localhost systemd[1]: tmp-crun.qGuTon.mount: Deactivated successfully. Oct 5 06:05:38 localhost systemd[1]: var-lib-containers-storage-overlay-8f6b52d395b273416bd802024c738faa0f3c9aef29d124ca4628701eb617a6f3-merged.mount: Deactivated successfully. 
Oct 5 06:05:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f52715e221afb20d0f075ab00c15dcb1b013ae55a894c612f53ff45c3d84b8a4-userdata-shm.mount: Deactivated successfully. Oct 5 06:05:39 localhost systemd[1]: run-netns-qdhcp\x2d2021757a\x2d11b5\x2d4760\x2d9e2a\x2dc264808c4d2b.mount: Deactivated successfully. Oct 5 06:05:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:39.117 272040 INFO neutron.agent.dhcp.agent [None req-26c73f1f-ee86-406e-844a-fdfe44833682 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:39 localhost nova_compute[297021]: 2025-10-05 10:05:39.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:42 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:42.504 2 INFO neutron.agent.securitygroups_rpc [None req-e2152d4c-0acb-4b85-9281-c419e6ddd1b9 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:05:43 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:43.208 272040 INFO neutron.agent.linux.ip_lib [None req-40877a0e-62b1-4b97-bebe-f4057b362a17 - - - - - -] Device tap448e35f6-ec cannot be used as it has no MAC address#033[00m Oct 5 06:05:43 localhost nova_compute[297021]: 2025-10-05 10:05:43.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost podman[330050]: 2025-10-05 10:05:43.264802121 +0000 UTC m=+0.112487387 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:05:43 localhost kernel: device tap448e35f6-ec entered promiscuous mode Oct 5 06:05:43 localhost NetworkManager[5981]: [1759658743.2707] manager: (tap448e35f6-ec): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Oct 5 06:05:43 localhost ovn_controller[157794]: 2025-10-05T10:05:43Z|00229|binding|INFO|Claiming lport 448e35f6-ec92-4253-a94a-cd8f7d67285c for this chassis. 
Oct 5 06:05:43 localhost nova_compute[297021]: 2025-10-05 10:05:43.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost podman[330050]: 2025-10-05 10:05:43.276004595 +0000 UTC m=+0.123689861 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:05:43 localhost ovn_controller[157794]: 2025-10-05T10:05:43Z|00230|binding|INFO|448e35f6-ec92-4253-a94a-cd8f7d67285c: Claiming unknown Oct 5 06:05:43 localhost systemd-udevd[330082]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:05:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:43.287 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:914d/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-28bfede0-541b-4406-9cfd-f3e5a5daff0a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28bfede0-541b-4406-9cfd-f3e5a5daff0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b19cb2ed6df34a0dad27155d804f6680', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21d5c079-a3f9-4a54-a1d6-fb674490aaf6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=448e35f6-ec92-4253-a94a-cd8f7d67285c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:43.289 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 448e35f6-ec92-4253-a94a-cd8f7d67285c in datapath 28bfede0-541b-4406-9cfd-f3e5a5daff0a bound to our chassis#033[00m Oct 5 06:05:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:43.292 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 1911adc7-1eee-45cf-817c-f586263b901c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:05:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:43.292 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28bfede0-541b-4406-9cfd-f3e5a5daff0a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:05:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:43.293 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[02445591-015f-4863-9d7a-1b4cdb137c96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:43 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:05:43 localhost journal[237931]: ethtool ioctl error on tap448e35f6-ec: No such device Oct 5 06:05:43 localhost nova_compute[297021]: 2025-10-05 10:05:43.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost journal[237931]: ethtool ioctl error on tap448e35f6-ec: No such device Oct 5 06:05:43 localhost ovn_controller[157794]: 2025-10-05T10:05:43Z|00231|binding|INFO|Setting lport 448e35f6-ec92-4253-a94a-cd8f7d67285c ovn-installed in OVS Oct 5 06:05:43 localhost ovn_controller[157794]: 2025-10-05T10:05:43Z|00232|binding|INFO|Setting lport 448e35f6-ec92-4253-a94a-cd8f7d67285c up in Southbound Oct 5 06:05:43 localhost nova_compute[297021]: 2025-10-05 10:05:43.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost nova_compute[297021]: 2025-10-05 10:05:43.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost journal[237931]: ethtool 
ioctl error on tap448e35f6-ec: No such device Oct 5 06:05:43 localhost journal[237931]: ethtool ioctl error on tap448e35f6-ec: No such device Oct 5 06:05:43 localhost journal[237931]: ethtool ioctl error on tap448e35f6-ec: No such device Oct 5 06:05:43 localhost journal[237931]: ethtool ioctl error on tap448e35f6-ec: No such device Oct 5 06:05:43 localhost journal[237931]: ethtool ioctl error on tap448e35f6-ec: No such device Oct 5 06:05:43 localhost journal[237931]: ethtool ioctl error on tap448e35f6-ec: No such device Oct 5 06:05:43 localhost nova_compute[297021]: 2025-10-05 10:05:43.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost nova_compute[297021]: 2025-10-05 10:05:43.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:44 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:44.010 2 INFO neutron.agent.securitygroups_rpc [None req-02192b33-0053-4542-85f0-0b2170402ee7 dd7c8ef99d0f41198e47651e3f745b5f b19cb2ed6df34a0dad27155d804f6680 - - default default] Security group member updated ['587ef845-3f12-4f64-8d07-19635386ce1f']#033[00m Oct 5 06:05:44 localhost podman[330153]: Oct 5 06:05:44 localhost podman[330153]: 2025-10-05 10:05:44.168369939 +0000 UTC m=+0.091635260 container create 2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-28bfede0-541b-4406-9cfd-f3e5a5daff0a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Oct 5 06:05:44 localhost 
systemd[1]: Started libpod-conmon-2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3.scope. Oct 5 06:05:44 localhost systemd[1]: tmp-crun.nhl6lv.mount: Deactivated successfully. Oct 5 06:05:44 localhost podman[330153]: 2025-10-05 10:05:44.123279655 +0000 UTC m=+0.046545016 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:44 localhost systemd[1]: Started libcrun container. Oct 5 06:05:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d3b6429123c3255ef3c6f0d8f95397f70ec0d94f6ed4c4a4893370fd06094e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:44 localhost podman[330153]: 2025-10-05 10:05:44.256450272 +0000 UTC m=+0.179715593 container init 2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-28bfede0-541b-4406-9cfd-f3e5a5daff0a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 06:05:44 localhost podman[330153]: 2025-10-05 10:05:44.265537728 +0000 UTC m=+0.188803049 container start 2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-28bfede0-541b-4406-9cfd-f3e5a5daff0a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) 
Oct 5 06:05:44 localhost dnsmasq[330171]: started, version 2.85 cachesize 150 Oct 5 06:05:44 localhost dnsmasq[330171]: DNS service limited to local subnets Oct 5 06:05:44 localhost dnsmasq[330171]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:44 localhost dnsmasq[330171]: warning: no upstream servers configured Oct 5 06:05:44 localhost dnsmasq-dhcp[330171]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:05:44 localhost dnsmasq[330171]: read /var/lib/neutron/dhcp/28bfede0-541b-4406-9cfd-f3e5a5daff0a/addn_hosts - 0 addresses Oct 5 06:05:44 localhost dnsmasq-dhcp[330171]: read /var/lib/neutron/dhcp/28bfede0-541b-4406-9cfd-f3e5a5daff0a/host Oct 5 06:05:44 localhost dnsmasq-dhcp[330171]: read /var/lib/neutron/dhcp/28bfede0-541b-4406-9cfd-f3e5a5daff0a/opts Oct 5 06:05:44 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:44.324 272040 INFO neutron.agent.dhcp.agent [None req-40877a0e-62b1-4b97-bebe-f4057b362a17 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:43Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b8a49a8b-9e8d-4680-a554-f42dcbbc0d57, ip_allocation=immediate, mac_address=fa:16:3e:13:1a:dc, name=tempest-NetworksIpV6TestAttrs-1567749488, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:39Z, description=, dns_domain=, id=28bfede0-541b-4406-9cfd-f3e5a5daff0a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksIpV6TestAttrs-test-network-918965159, port_security_enabled=True, project_id=b19cb2ed6df34a0dad27155d804f6680, provider:network_type=geneve, 
provider:physical_network=None, provider:segmentation_id=22005, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1451, status=ACTIVE, subnets=['b35ebe2b-965d-431d-9078-ef497fde0249'], tags=[], tenant_id=b19cb2ed6df34a0dad27155d804f6680, updated_at=2025-10-05T10:05:41Z, vlan_transparent=None, network_id=28bfede0-541b-4406-9cfd-f3e5a5daff0a, port_security_enabled=True, project_id=b19cb2ed6df34a0dad27155d804f6680, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['587ef845-3f12-4f64-8d07-19635386ce1f'], standard_attr_id=1470, status=DOWN, tags=[], tenant_id=b19cb2ed6df34a0dad27155d804f6680, updated_at=2025-10-05T10:05:43Z on network 28bfede0-541b-4406-9cfd-f3e5a5daff0a#033[00m Oct 5 06:05:44 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:44.415 272040 INFO neutron.agent.dhcp.agent [None req-989fa7ec-7b73-4758-9865-e79a3c4dd1d4 - - - - - -] DHCP configuration for ports {'00df7f6a-5dfd-4de1-a320-54f55051c0c5'} is completed#033[00m Oct 5 06:05:44 localhost nova_compute[297021]: 2025-10-05 10:05:44.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:44 localhost dnsmasq[330171]: read /var/lib/neutron/dhcp/28bfede0-541b-4406-9cfd-f3e5a5daff0a/addn_hosts - 1 addresses Oct 5 06:05:44 localhost dnsmasq-dhcp[330171]: read /var/lib/neutron/dhcp/28bfede0-541b-4406-9cfd-f3e5a5daff0a/host Oct 5 06:05:44 localhost dnsmasq-dhcp[330171]: read /var/lib/neutron/dhcp/28bfede0-541b-4406-9cfd-f3e5a5daff0a/opts Oct 5 06:05:44 localhost podman[330189]: 2025-10-05 10:05:44.524312506 +0000 UTC m=+0.069711705 container kill 2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-28bfede0-541b-4406-9cfd-f3e5a5daff0a, tcib_managed=true, org.label-schema.name=CentOS 
Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:05:44 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:44.799 272040 INFO neutron.agent.dhcp.agent [None req-ca98f0b9-765c-4504-9ed5-7fa45633f460 - - - - - -] DHCP configuration for ports {'b8a49a8b-9e8d-4680-a554-f42dcbbc0d57'} is completed#033[00m Oct 5 06:05:46 localhost dnsmasq[330171]: exiting on receipt of SIGTERM Oct 5 06:05:46 localhost podman[330227]: 2025-10-05 10:05:46.027508299 +0000 UTC m=+0.060808393 container kill 2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-28bfede0-541b-4406-9cfd-f3e5a5daff0a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:05:46 localhost systemd[1]: libpod-2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3.scope: Deactivated successfully. 
Oct 5 06:05:46 localhost podman[330240]: 2025-10-05 10:05:46.095450904 +0000 UTC m=+0.056131885 container died 2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-28bfede0-541b-4406-9cfd-f3e5a5daff0a, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:05:46 localhost podman[330240]: 2025-10-05 10:05:46.130840875 +0000 UTC m=+0.091521816 container cleanup 2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-28bfede0-541b-4406-9cfd-f3e5a5daff0a, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:05:46 localhost systemd[1]: libpod-conmon-2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3.scope: Deactivated successfully. Oct 5 06:05:46 localhost systemd[1]: var-lib-containers-storage-overlay-f8d3b6429123c3255ef3c6f0d8f95397f70ec0d94f6ed4c4a4893370fd06094e-merged.mount: Deactivated successfully. Oct 5 06:05:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:05:46 localhost podman[330248]: 2025-10-05 10:05:46.185572291 +0000 UTC m=+0.134534044 container remove 2924b65a9daea43feaf3abd69dc4e4f159bc3998164518189fb9cf23df4791d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-28bfede0-541b-4406-9cfd-f3e5a5daff0a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:05:46 localhost ovn_controller[157794]: 2025-10-05T10:05:46Z|00233|binding|INFO|Releasing lport 448e35f6-ec92-4253-a94a-cd8f7d67285c from this chassis (sb_readonly=0) Oct 5 06:05:46 localhost ovn_controller[157794]: 2025-10-05T10:05:46Z|00234|binding|INFO|Setting lport 448e35f6-ec92-4253-a94a-cd8f7d67285c down in Southbound Oct 5 06:05:46 localhost kernel: device tap448e35f6-ec left promiscuous mode Oct 5 06:05:46 localhost nova_compute[297021]: 2025-10-05 10:05:46.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:46 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:46.209 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:914d/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-28bfede0-541b-4406-9cfd-f3e5a5daff0a', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-28bfede0-541b-4406-9cfd-f3e5a5daff0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b19cb2ed6df34a0dad27155d804f6680', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21d5c079-a3f9-4a54-a1d6-fb674490aaf6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=448e35f6-ec92-4253-a94a-cd8f7d67285c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:46 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:46.210 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 448e35f6-ec92-4253-a94a-cd8f7d67285c in datapath 28bfede0-541b-4406-9cfd-f3e5a5daff0a unbound from our chassis#033[00m Oct 5 06:05:46 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:46.213 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28bfede0-541b-4406-9cfd-f3e5a5daff0a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:05:46 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:46.215 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[bc885124-7a0b-4a37-8676-a93ed7a48a3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:46 localhost nova_compute[297021]: 2025-10-05 10:05:46.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:46 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:46.512 272040 INFO neutron.agent.dhcp.agent [None 
req-baa1a7de-a1f5-498b-971a-3f40ef13ab0c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:46 localhost systemd[1]: run-netns-qdhcp\x2d28bfede0\x2d541b\x2d4406\x2d9cfd\x2df3e5a5daff0a.mount: Deactivated successfully. Oct 5 06:05:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:46 localhost neutron_sriov_agent[264984]: 2025-10-05 10:05:46.970 2 INFO neutron.agent.securitygroups_rpc [None req-f585fb86-eb3a-4f02-92d6-e24b1dd983df fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:47 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:47.020 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:05:48 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:48.607 272040 INFO neutron.agent.linux.ip_lib [None req-d6e02007-cb94-4c2e-b57a-c3682bdd074a - - - - - -] Device tapefe4c880-c4 cannot be used as it has no MAC address#033[00m Oct 5 06:05:48 localhost podman[330274]: 2025-10-05 10:05:48.628944227 +0000 UTC m=+0.088657128 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:05:48 localhost nova_compute[297021]: 2025-10-05 10:05:48.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:48 
localhost podman[330274]: 2025-10-05 10:05:48.638160518 +0000 UTC m=+0.097873369 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:05:48 localhost kernel: device tapefe4c880-c4 entered promiscuous mode Oct 5 06:05:48 localhost NetworkManager[5981]: [1759658748.6423] manager: (tapefe4c880-c4): new Generic device (/org/freedesktop/NetworkManager/Devices/40) Oct 5 06:05:48 localhost nova_compute[297021]: 2025-10-05 10:05:48.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:48 localhost systemd-udevd[330304]: Network interface 
NamePolicy= disabled on kernel command line. Oct 5 06:05:48 localhost ovn_controller[157794]: 2025-10-05T10:05:48Z|00235|binding|INFO|Claiming lport efe4c880-c4ce-4ab3-979f-d5b97bc71b68 for this chassis. Oct 5 06:05:48 localhost ovn_controller[157794]: 2025-10-05T10:05:48Z|00236|binding|INFO|efe4c880-c4ce-4ab3-979f-d5b97bc71b68: Claiming unknown Oct 5 06:05:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:48.657 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-680567b1-9b84-4077-a926-3629810550c9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-680567b1-9b84-4077-a926-3629810550c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dacd75da-185a-4233-9770-d81da152ca4c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=efe4c880-c4ce-4ab3-979f-d5b97bc71b68) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:48.659 163434 INFO neutron.agent.ovn.metadata.agent [-] Port efe4c880-c4ce-4ab3-979f-d5b97bc71b68 in datapath 680567b1-9b84-4077-a926-3629810550c9 
bound to our chassis#033[00m Oct 5 06:05:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:48.661 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 680567b1-9b84-4077-a926-3629810550c9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:48.661 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[122d4ab2-e443-48a7-ad20-ed8fa9281c84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:48 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:05:48 localhost journal[237931]: ethtool ioctl error on tapefe4c880-c4: No such device Oct 5 06:05:48 localhost journal[237931]: ethtool ioctl error on tapefe4c880-c4: No such device Oct 5 06:05:48 localhost ovn_controller[157794]: 2025-10-05T10:05:48Z|00237|binding|INFO|Setting lport efe4c880-c4ce-4ab3-979f-d5b97bc71b68 ovn-installed in OVS Oct 5 06:05:48 localhost ovn_controller[157794]: 2025-10-05T10:05:48Z|00238|binding|INFO|Setting lport efe4c880-c4ce-4ab3-979f-d5b97bc71b68 up in Southbound Oct 5 06:05:48 localhost journal[237931]: ethtool ioctl error on tapefe4c880-c4: No such device Oct 5 06:05:48 localhost nova_compute[297021]: 2025-10-05 10:05:48.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:48 localhost journal[237931]: ethtool ioctl error on tapefe4c880-c4: No such device Oct 5 06:05:48 localhost journal[237931]: ethtool ioctl error on tapefe4c880-c4: No such device Oct 5 06:05:48 localhost journal[237931]: ethtool ioctl error on tapefe4c880-c4: No such device Oct 5 06:05:48 localhost journal[237931]: ethtool ioctl error on tapefe4c880-c4: No such 
device Oct 5 06:05:48 localhost journal[237931]: ethtool ioctl error on tapefe4c880-c4: No such device Oct 5 06:05:48 localhost nova_compute[297021]: 2025-10-05 10:05:48.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:48 localhost nova_compute[297021]: 2025-10-05 10:05:48.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:49 localhost nova_compute[297021]: 2025-10-05 10:05:49.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:49 localhost podman[330376]: Oct 5 06:05:49 localhost podman[330376]: 2025-10-05 10:05:49.659886235 +0000 UTC m=+0.081923626 container create 9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-680567b1-9b84-4077-a926-3629810550c9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 06:05:49 localhost systemd[1]: Started libpod-conmon-9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3.scope. Oct 5 06:05:49 localhost podman[330376]: 2025-10-05 10:05:49.613565907 +0000 UTC m=+0.035603278 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:49 localhost systemd[1]: Started libcrun container. 
Oct 5 06:05:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcdf7978942589f266d1b04d907484aea7690b4991548adf7b9451c1b72096d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:49 localhost podman[330376]: 2025-10-05 10:05:49.733788182 +0000 UTC m=+0.155825523 container init 9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-680567b1-9b84-4077-a926-3629810550c9, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:49 localhost podman[330376]: 2025-10-05 10:05:49.744621707 +0000 UTC m=+0.166659048 container start 9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-680567b1-9b84-4077-a926-3629810550c9, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:05:49 localhost dnsmasq[330395]: started, version 2.85 cachesize 150 Oct 5 06:05:49 localhost dnsmasq[330395]: DNS service limited to local subnets Oct 5 06:05:49 localhost dnsmasq[330395]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:49 localhost dnsmasq[330395]: warning: no upstream servers configured Oct 
5 06:05:49 localhost dnsmasq-dhcp[330395]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:05:49 localhost dnsmasq[330395]: read /var/lib/neutron/dhcp/680567b1-9b84-4077-a926-3629810550c9/addn_hosts - 0 addresses Oct 5 06:05:49 localhost dnsmasq-dhcp[330395]: read /var/lib/neutron/dhcp/680567b1-9b84-4077-a926-3629810550c9/host Oct 5 06:05:49 localhost dnsmasq-dhcp[330395]: read /var/lib/neutron/dhcp/680567b1-9b84-4077-a926-3629810550c9/opts Oct 5 06:05:49 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:49.898 272040 INFO neutron.agent.dhcp.agent [None req-9cc30067-da4a-4409-906c-94017b5c6d68 - - - - - -] DHCP configuration for ports {'db0f4639-00b1-4e6c-a53c-d1002af28942'} is completed#033[00m Oct 5 06:05:50 localhost nova_compute[297021]: 2025-10-05 10:05:50.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:51.000 272040 INFO neutron.agent.linux.ip_lib [None req-cf95a0e9-f79a-4ebf-8103-fda753fe05f6 - - - - - -] Device tapefa7523e-ea cannot be used as it has no MAC address#033[00m Oct 5 06:05:51 localhost nova_compute[297021]: 2025-10-05 10:05:51.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:51 localhost kernel: device tapefa7523e-ea entered promiscuous mode Oct 5 06:05:51 localhost ovn_controller[157794]: 2025-10-05T10:05:51Z|00239|binding|INFO|Claiming lport efa7523e-ea87-4e5e-95b0-a245e174317d for this chassis. 
Oct 5 06:05:51 localhost ovn_controller[157794]: 2025-10-05T10:05:51Z|00240|binding|INFO|efa7523e-ea87-4e5e-95b0-a245e174317d: Claiming unknown Oct 5 06:05:51 localhost NetworkManager[5981]: [1759658751.0304] manager: (tapefa7523e-ea): new Generic device (/org/freedesktop/NetworkManager/Devices/41) Oct 5 06:05:51 localhost nova_compute[297021]: 2025-10-05 10:05:51.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:51.040 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb1:2c8f/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b19cb2ed6df34a0dad27155d804f6680', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a471572c-711a-4769-8ce1-c11ff69ed94c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=efa7523e-ea87-4e5e-95b0-a245e174317d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:51 localhost 
ovn_metadata_agent[163429]: 2025-10-05 10:05:51.042 163434 INFO neutron.agent.ovn.metadata.agent [-] Port efa7523e-ea87-4e5e-95b0-a245e174317d in datapath 7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b bound to our chassis#033[00m Oct 5 06:05:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:51.047 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9aaee08b-975e-4710-9304-a0c4a907c90b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:05:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:51.048 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:05:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:51.049 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[285c860f-cf51-42f6-b8f2-68c0f30a13eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:51 localhost ovn_controller[157794]: 2025-10-05T10:05:51Z|00241|binding|INFO|Setting lport efa7523e-ea87-4e5e-95b0-a245e174317d ovn-installed in OVS Oct 5 06:05:51 localhost ovn_controller[157794]: 2025-10-05T10:05:51Z|00242|binding|INFO|Setting lport efa7523e-ea87-4e5e-95b0-a245e174317d up in Southbound Oct 5 06:05:51 localhost nova_compute[297021]: 2025-10-05 10:05:51.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:51 localhost nova_compute[297021]: 2025-10-05 10:05:51.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:51 localhost nova_compute[297021]: 2025-10-05 10:05:51.139 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:51 localhost sshd[330423]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:05:51 localhost podman[248506]: time="2025-10-05T10:05:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:05:51 localhost podman[248506]: @ - - [05/Oct/2025:10:05:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147495 "" "Go-http-client/1.1" Oct 5 06:05:51 localhost podman[248506]: @ - - [05/Oct/2025:10:05:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19848 "" "Go-http-client/1.1" Oct 5 06:05:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:51 localhost podman[330459]: Oct 5 06:05:51 localhost podman[330459]: 2025-10-05 10:05:51.939651388 +0000 UTC m=+0.092868213 container create 1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:05:51 localhost systemd[1]: Started libpod-conmon-1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965.scope. 
Oct 5 06:05:51 localhost podman[330459]: 2025-10-05 10:05:51.897503904 +0000 UTC m=+0.050720739 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:52 localhost systemd[1]: Started libcrun container. Oct 5 06:05:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc450612a7d7c4cf557c54ec1ae38cee6d6538960a42d2d4eb440b867e72244/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:52 localhost podman[330459]: 2025-10-05 10:05:52.023414333 +0000 UTC m=+0.176631158 container init 1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:52 localhost openstack_network_exporter[250601]: ERROR 10:05:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:05:52 localhost openstack_network_exporter[250601]: ERROR 10:05:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:05:52 localhost openstack_network_exporter[250601]: ERROR 10:05:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:05:52 localhost openstack_network_exporter[250601]: ERROR 10:05:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:05:52 localhost openstack_network_exporter[250601]: Oct 5 06:05:52 localhost openstack_network_exporter[250601]: ERROR 10:05:52 appctl.go:174: 
call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:05:52 localhost openstack_network_exporter[250601]: Oct 5 06:05:52 localhost podman[330459]: 2025-10-05 10:05:52.034761581 +0000 UTC m=+0.187978406 container start 1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 06:05:52 localhost dnsmasq[330478]: started, version 2.85 cachesize 150 Oct 5 06:05:52 localhost dnsmasq[330478]: DNS service limited to local subnets Oct 5 06:05:52 localhost dnsmasq[330478]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:52 localhost dnsmasq[330478]: warning: no upstream servers configured Oct 5 06:05:52 localhost dnsmasq[330478]: read /var/lib/neutron/dhcp/7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b/addn_hosts - 0 addresses Oct 5 06:05:52 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:52.228 272040 INFO neutron.agent.dhcp.agent [None req-617ede84-3a42-4144-8528-f147d239dfc7 - - - - - -] DHCP configuration for ports {'2dbae4f5-9e1f-4da9-846a-81557b8ced61'} is completed#033[00m Oct 5 06:05:52 localhost dnsmasq[330478]: exiting on receipt of SIGTERM Oct 5 06:05:52 localhost systemd[1]: libpod-1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965.scope: Deactivated successfully. 
Oct 5 06:05:52 localhost podman[330496]: 2025-10-05 10:05:52.382613288 +0000 UTC m=+0.050078111 container kill 1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:05:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:05:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:05:52 localhost nova_compute[297021]: 2025-10-05 10:05:52.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:52 localhost podman[330509]: 2025-10-05 10:05:52.469027345 +0000 UTC m=+0.072424088 container died 1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:52 localhost podman[330509]: 2025-10-05 10:05:52.502172565 +0000 UTC m=+0.105569278 container cleanup 1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:05:52 localhost systemd[1]: libpod-conmon-1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965.scope: Deactivated successfully. Oct 5 06:05:52 localhost ovn_controller[157794]: 2025-10-05T10:05:52Z|00243|binding|INFO|Removing iface tapefa7523e-ea ovn-installed in OVS Oct 5 06:05:52 localhost ovn_controller[157794]: 2025-10-05T10:05:52Z|00244|binding|INFO|Removing lport efa7523e-ea87-4e5e-95b0-a245e174317d ovn-installed in OVS Oct 5 06:05:52 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:52.522 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9aaee08b-975e-4710-9304-a0c4a907c90b with type ""#033[00m Oct 5 06:05:52 localhost nova_compute[297021]: 2025-10-05 10:05:52.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:52 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:52.523 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb1:2c8f/64', 'neutron:device_id': 
'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b19cb2ed6df34a0dad27155d804f6680', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a471572c-711a-4769-8ce1-c11ff69ed94c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=efa7523e-ea87-4e5e-95b0-a245e174317d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:52 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:52.526 163434 INFO neutron.agent.ovn.metadata.agent [-] Port efa7523e-ea87-4e5e-95b0-a245e174317d in datapath 7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b unbound from our chassis#033[00m Oct 5 06:05:52 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:52.527 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:52 localhost podman[330516]: 2025-10-05 10:05:52.528979434 +0000 UTC m=+0.116798244 container remove 1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b97b2bb-6d02-413e-bd7e-61bd2cc23d0b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:52 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:52.528 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[acedb3ee-49e1-422c-ae51-cebf0df72648]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:52 localhost nova_compute[297021]: 2025-10-05 10:05:52.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:52 localhost nova_compute[297021]: 2025-10-05 10:05:52.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:52 localhost kernel: device tapefa7523e-ea left promiscuous mode Oct 5 06:05:52 localhost nova_compute[297021]: 2025-10-05 10:05:52.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:52 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:52.570 272040 INFO neutron.agent.dhcp.agent [None req-b114ff38-645e-43ca-b892-f9fdfd3623b1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:52 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:52.571 272040 INFO neutron.agent.dhcp.agent [None req-b114ff38-645e-43ca-b892-f9fdfd3623b1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:52 localhost podman[330517]: 2025-10-05 10:05:52.570633574 +0000 UTC m=+0.146012636 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:05:52 localhost podman[330521]: 2025-10-05 10:05:52.614801674 +0000 UTC m=+0.191657366 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:05:52 localhost podman[330521]: 2025-10-05 10:05:52.632837624 +0000 UTC m=+0.209693276 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:52 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:05:52 localhost podman[330517]: 2025-10-05 10:05:52.685879175 +0000 UTC m=+0.261258207 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:52 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:05:52 localhost systemd[1]: var-lib-containers-storage-overlay-fcc450612a7d7c4cf557c54ec1ae38cee6d6538960a42d2d4eb440b867e72244-merged.mount: Deactivated successfully. Oct 5 06:05:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d88f3a01a7585b4011234c4086808adfb9c9695d1b2f3a6863de7d4de90b965-userdata-shm.mount: Deactivated successfully. Oct 5 06:05:52 localhost systemd[1]: run-netns-qdhcp\x2d7b97b2bb\x2d6d02\x2d413e\x2dbd7e\x2d61bd2cc23d0b.mount: Deactivated successfully. Oct 5 06:05:53 localhost ovn_controller[157794]: 2025-10-05T10:05:53Z|00245|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:05:53 localhost nova_compute[297021]: 2025-10-05 10:05:53.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:53 localhost ovn_controller[157794]: 2025-10-05T10:05:53Z|00246|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:05:53 localhost nova_compute[297021]: 2025-10-05 10:05:53.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.303 272040 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.513 272040 INFO neutron.agent.dhcp.agent [None req-ee108210-465c-4318-bfe8-f64120430308 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.515 272040 INFO neutron.agent.dhcp.agent [-] Starting network 2021757a-11b5-4760-9e2a-c264808c4d2b dhcp configuration#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.515 272040 INFO neutron.agent.dhcp.agent [-] Finished network 
2021757a-11b5-4760-9e2a-c264808c4d2b dhcp configuration#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.516 272040 INFO neutron.agent.dhcp.agent [-] Starting network 28bfede0-541b-4406-9cfd-f3e5a5daff0a dhcp configuration#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.516 272040 INFO neutron.agent.dhcp.agent [-] Finished network 28bfede0-541b-4406-9cfd-f3e5a5daff0a dhcp configuration#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.516 272040 INFO neutron.agent.dhcp.agent [-] Starting network d24358df-730a-4311-aeb2-20243a504d81 dhcp configuration#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.517 272040 INFO neutron.agent.dhcp.agent [-] Finished network d24358df-730a-4311-aeb2-20243a504d81 dhcp configuration#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.517 272040 INFO neutron.agent.dhcp.agent [-] Starting network e415af9b-2aaa-4639-8059-9ef121538b80 dhcp configuration#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.518 272040 INFO neutron.agent.dhcp.agent [-] Finished network e415af9b-2aaa-4639-8059-9ef121538b80 dhcp configuration#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.518 272040 INFO neutron.agent.dhcp.agent [None req-ee108210-465c-4318-bfe8-f64120430308 - - - - - -] Synchronizing state complete#033[00m Oct 5 06:05:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:54.519 272040 INFO neutron.agent.dhcp.agent [None req-d4663395-eff0-4d0b-87fb-e8df76645673 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:54 localhost nova_compute[297021]: 2025-10-05 10:05:54.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:55 localhost ovn_controller[157794]: 
2025-10-05T10:05:55Z|00247|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:05:55 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:55.310 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:55 localhost nova_compute[297021]: 2025-10-05 10:05:55.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:58 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:58.565 272040 INFO neutron.agent.linux.ip_lib [None req-9fd4f097-5b16-494f-8145-48bf94f5bf67 - - - - - -] Device tap7afb25a0-4d cannot be used as it has no MAC address#033[00m Oct 5 06:05:58 localhost nova_compute[297021]: 2025-10-05 10:05:58.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:58 localhost kernel: device tap7afb25a0-4d entered promiscuous mode Oct 5 06:05:58 localhost NetworkManager[5981]: [1759658758.6097] manager: (tap7afb25a0-4d): new Generic device (/org/freedesktop/NetworkManager/Devices/42) Oct 5 06:05:58 localhost nova_compute[297021]: 2025-10-05 10:05:58.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:58 localhost systemd-udevd[330579]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:05:58 localhost ovn_controller[157794]: 2025-10-05T10:05:58Z|00248|binding|INFO|Claiming lport 7afb25a0-4d6c-4d53-b0a6-e6228c6e96d3 for this chassis. 
Oct 5 06:05:58 localhost ovn_controller[157794]: 2025-10-05T10:05:58Z|00249|binding|INFO|7afb25a0-4d6c-4d53-b0a6-e6228c6e96d3: Claiming unknown Oct 5 06:05:58 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:58.628 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-efa5b15e-722c-4d87-a918-18fadce0c12e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa5b15e-722c-4d87-a918-18fadce0c12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e1138b1-6a8e-40d9-98e5-b8580ac8c37e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7afb25a0-4d6c-4d53-b0a6-e6228c6e96d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:58 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:58.630 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 7afb25a0-4d6c-4d53-b0a6-e6228c6e96d3 in datapath efa5b15e-722c-4d87-a918-18fadce0c12e bound to our chassis#033[00m Oct 5 06:05:58 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:58.632 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
efa5b15e-722c-4d87-a918-18fadce0c12e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:58 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:58.633 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[e21e0fd8-f508-4c88-8890-80ee1a27287a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:58 localhost journal[237931]: ethtool ioctl error on tap7afb25a0-4d: No such device Oct 5 06:05:58 localhost journal[237931]: ethtool ioctl error on tap7afb25a0-4d: No such device Oct 5 06:05:58 localhost ovn_controller[157794]: 2025-10-05T10:05:58Z|00250|binding|INFO|Setting lport 7afb25a0-4d6c-4d53-b0a6-e6228c6e96d3 ovn-installed in OVS Oct 5 06:05:58 localhost ovn_controller[157794]: 2025-10-05T10:05:58Z|00251|binding|INFO|Setting lport 7afb25a0-4d6c-4d53-b0a6-e6228c6e96d3 up in Southbound Oct 5 06:05:58 localhost nova_compute[297021]: 2025-10-05 10:05:58.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:58 localhost journal[237931]: ethtool ioctl error on tap7afb25a0-4d: No such device Oct 5 06:05:58 localhost journal[237931]: ethtool ioctl error on tap7afb25a0-4d: No such device Oct 5 06:05:58 localhost journal[237931]: ethtool ioctl error on tap7afb25a0-4d: No such device Oct 5 06:05:58 localhost journal[237931]: ethtool ioctl error on tap7afb25a0-4d: No such device Oct 5 06:05:58 localhost journal[237931]: ethtool ioctl error on tap7afb25a0-4d: No such device Oct 5 06:05:58 localhost journal[237931]: ethtool ioctl error on tap7afb25a0-4d: No such device Oct 5 06:05:58 localhost nova_compute[297021]: 2025-10-05 10:05:58.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:58 
localhost nova_compute[297021]: 2025-10-05 10:05:58.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:59 localhost podman[330648]: Oct 5 06:05:59 localhost podman[330648]: 2025-10-05 10:05:59.503280479 +0000 UTC m=+0.085390119 container create c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efa5b15e-722c-4d87-a918-18fadce0c12e, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:59 localhost systemd[1]: Started libpod-conmon-c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130.scope. Oct 5 06:05:59 localhost systemd[1]: Started libcrun container. 
Oct 5 06:05:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5848c7fe8c9edf546f4d38c20f18c5004fcf12a01c6ac053f961dd4e6bebe77e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:59 localhost podman[330648]: 2025-10-05 10:05:59.465142524 +0000 UTC m=+0.047252264 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:59 localhost nova_compute[297021]: 2025-10-05 10:05:59.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:59 localhost podman[330648]: 2025-10-05 10:05:59.616255007 +0000 UTC m=+0.198364667 container init c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efa5b15e-722c-4d87-a918-18fadce0c12e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:59 localhost podman[330648]: 2025-10-05 10:05:59.624357387 +0000 UTC m=+0.206467037 container start c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efa5b15e-722c-4d87-a918-18fadce0c12e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:59 localhost dnsmasq[330667]: started, 
version 2.85 cachesize 150 Oct 5 06:05:59 localhost dnsmasq[330667]: DNS service limited to local subnets Oct 5 06:05:59 localhost dnsmasq[330667]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:59 localhost dnsmasq[330667]: warning: no upstream servers configured Oct 5 06:05:59 localhost dnsmasq-dhcp[330667]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:05:59 localhost dnsmasq[330667]: read /var/lib/neutron/dhcp/efa5b15e-722c-4d87-a918-18fadce0c12e/addn_hosts - 0 addresses Oct 5 06:05:59 localhost dnsmasq-dhcp[330667]: read /var/lib/neutron/dhcp/efa5b15e-722c-4d87-a918-18fadce0c12e/host Oct 5 06:05:59 localhost dnsmasq-dhcp[330667]: read /var/lib/neutron/dhcp/efa5b15e-722c-4d87-a918-18fadce0c12e/opts Oct 5 06:05:59 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:05:59.708 272040 INFO neutron.agent.dhcp.agent [None req-c7c2608f-5403-44d6-b1c4-3fa5a59f04dd - - - - - -] DHCP configuration for ports {'d2c31e9f-76d1-457c-9247-280915e6d306'} is completed#033[00m Oct 5 06:05:59 localhost podman[330685]: 2025-10-05 10:05:59.930780719 +0000 UTC m=+0.066002493 container kill c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efa5b15e-722c-4d87-a918-18fadce0c12e, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 06:05:59 localhost dnsmasq[330667]: exiting on receipt of SIGTERM Oct 5 06:05:59 localhost systemd[1]: libpod-c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130.scope: Deactivated successfully. 
Oct 5 06:05:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:05:59 localhost ovn_controller[157794]: 2025-10-05T10:05:59Z|00252|binding|INFO|Removing iface tap7afb25a0-4d ovn-installed in OVS Oct 5 06:05:59 localhost ovn_controller[157794]: 2025-10-05T10:05:59Z|00253|binding|INFO|Removing lport 7afb25a0-4d6c-4d53-b0a6-e6228c6e96d3 ovn-installed in OVS Oct 5 06:05:59 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:59.961 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ceab9aac-55ba-4fdd-9f9c-8addaa19a479 with type ""#033[00m Oct 5 06:05:59 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:59.964 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-efa5b15e-722c-4d87-a918-18fadce0c12e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa5b15e-722c-4d87-a918-18fadce0c12e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e1138b1-6a8e-40d9-98e5-b8580ac8c37e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7afb25a0-4d6c-4d53-b0a6-e6228c6e96d3) old= matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:59 localhost nova_compute[297021]: 2025-10-05 10:05:59.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:59 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:59.967 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 7afb25a0-4d6c-4d53-b0a6-e6228c6e96d3 in datapath efa5b15e-722c-4d87-a918-18fadce0c12e unbound from our chassis#033[00m Oct 5 06:05:59 localhost nova_compute[297021]: 2025-10-05 10:05:59.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:59 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:59.970 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network efa5b15e-722c-4d87-a918-18fadce0c12e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:59 localhost ovn_metadata_agent[163429]: 2025-10-05 10:05:59.971 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[10cde221-6949-4b67-bf1f-56791da8ac4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:00 localhost podman[330700]: 2025-10-05 10:06:00.026091028 +0000 UTC m=+0.067956786 container died c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efa5b15e-722c-4d87-a918-18fadce0c12e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:00 localhost podman[330700]: 2025-10-05 10:06:00.069717102 +0000 UTC m=+0.111582810 container remove c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-efa5b15e-722c-4d87-a918-18fadce0c12e, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:06:00 localhost kernel: device tap7afb25a0-4d left promiscuous mode Oct 5 06:06:00 localhost nova_compute[297021]: 2025-10-05 10:06:00.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:00 localhost nova_compute[297021]: 2025-10-05 10:06:00.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:00 localhost podman[330701]: 2025-10-05 10:06:00.11163335 +0000 UTC m=+0.147599128 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:00 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:00.117 272040 INFO neutron.agent.dhcp.agent [None req-33a44942-50e5-487e-a4c2-68a8fb083135 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:00 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:00.118 272040 INFO neutron.agent.dhcp.agent [None req-33a44942-50e5-487e-a4c2-68a8fb083135 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:00 localhost podman[330701]: 2025-10-05 10:06:00.11969859 +0000 UTC m=+0.155664388 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:06:00 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:06:00 localhost systemd[1]: libpod-conmon-c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130.scope: Deactivated successfully. 
Oct 5 06:06:00 localhost ovn_controller[157794]: 2025-10-05T10:06:00Z|00254|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:06:00 localhost nova_compute[297021]: 2025-10-05 10:06:00.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:00 localhost systemd[1]: var-lib-containers-storage-overlay-5848c7fe8c9edf546f4d38c20f18c5004fcf12a01c6ac053f961dd4e6bebe77e-merged.mount: Deactivated successfully. Oct 5 06:06:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c60ec1e165001feccf00204471bcd5e9e56c06b01c00dc8da25c52ca98fe6130-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:00 localhost systemd[1]: run-netns-qdhcp\x2defa5b15e\x2d722c\x2d4d87\x2da918\x2d18fadce0c12e.mount: Deactivated successfully. Oct 5 06:06:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:03 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:03.145 2 INFO neutron.agent.securitygroups_rpc [None req-0a4a9087-fb9d-46a4-b94c-813813807afd fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:06:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:06:03 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e124 do_prune osdmap full prune enabled Oct 5 06:06:03 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e125 e125: 6 total, 6 up, 6 in Oct 5 06:06:03 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e125: 6 total, 6 up, 6 in Oct 5 06:06:03 localhost podman[330744]: 2025-10-05 10:06:03.679378934 +0000 UTC m=+0.085601227 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:06:03 localhost podman[330744]: 2025-10-05 10:06:03.71606635 +0000 UTC m=+0.122288613 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:06:03 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:06:04 localhost nova_compute[297021]: 2025-10-05 10:06:04.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:04 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:04.984 2 INFO neutron.agent.securitygroups_rpc [None req-6ec6a26e-e61d-4275-bc99-fc7b5b675d25 dd7c8ef99d0f41198e47651e3f745b5f b19cb2ed6df34a0dad27155d804f6680 - - default default] Security group member updated ['587ef845-3f12-4f64-8d07-19635386ce1f']#033[00m Oct 5 06:06:05 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:05.042 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:05 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e125 do_prune osdmap full prune enabled Oct 5 06:06:05 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e126 e126: 6 total, 6 up, 6 in Oct 5 06:06:05 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e126: 6 total, 6 up, 6 in Oct 5 06:06:05 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:05.811 2 INFO neutron.agent.securitygroups_rpc [None req-6dc94aa3-8b4b-4c55-895e-1ee1eef07ff5 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:06:06 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:06.007 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:06:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:06 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:06.629 272040 INFO neutron.agent.linux.ip_lib [None req-aca6acf9-fc3a-4e8f-9b85-f7a277d44295 - - - - - -] Device tapab5d2ba0-ac cannot be used as it has no MAC address#033[00m Oct 5 06:06:06 localhost systemd[1]: tmp-crun.k3FqEA.mount: Deactivated successfully. Oct 5 06:06:06 localhost nova_compute[297021]: 2025-10-05 10:06:06.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:06 localhost podman[330772]: 2025-10-05 10:06:06.658754676 +0000 UTC m=+0.106107572 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:06:06 localhost kernel: device tapab5d2ba0-ac entered promiscuous mode Oct 5 06:06:06 localhost nova_compute[297021]: 2025-10-05 10:06:06.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:06 localhost NetworkManager[5981]: [1759658766.6646] manager: (tapab5d2ba0-ac): new Generic device (/org/freedesktop/NetworkManager/Devices/43) Oct 5 06:06:06 localhost ovn_controller[157794]: 2025-10-05T10:06:06Z|00255|binding|INFO|Claiming lport ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f for this chassis. Oct 5 06:06:06 localhost ovn_controller[157794]: 2025-10-05T10:06:06Z|00256|binding|INFO|ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f: Claiming unknown Oct 5 06:06:06 localhost systemd-udevd[330798]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:06:06 localhost podman[330772]: 2025-10-05 10:06:06.677939147 +0000 UTC m=+0.125292073 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:06.678 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-03879930-5f5d-4a43-9ad6-d4af4e85c2a1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03879930-5f5d-4a43-9ad6-d4af4e85c2a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb53c0d3-a47e-4537-b7cb-020fa863c0bc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:06.680 163434 INFO neutron.agent.ovn.metadata.agent [-] Port ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f in datapath 03879930-5f5d-4a43-9ad6-d4af4e85c2a1 bound to our chassis#033[00m Oct 5 06:06:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:06.684 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 03879930-5f5d-4a43-9ad6-d4af4e85c2a1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:06 localhost ovn_metadata_agent[163429]: 2025-10-05 
10:06:06.685 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3a179eb0-14e8-4509-9c1c-b24ec5ec6eec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:06 localhost journal[237931]: ethtool ioctl error on tapab5d2ba0-ac: No such device Oct 5 06:06:06 localhost ovn_controller[157794]: 2025-10-05T10:06:06Z|00257|binding|INFO|Setting lport ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f ovn-installed in OVS Oct 5 06:06:06 localhost ovn_controller[157794]: 2025-10-05T10:06:06Z|00258|binding|INFO|Setting lport ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f up in Southbound Oct 5 06:06:06 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:06:06 localhost nova_compute[297021]: 2025-10-05 10:06:06.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e126 do_prune osdmap full prune enabled Oct 5 06:06:06 localhost journal[237931]: ethtool ioctl error on tapab5d2ba0-ac: No such device Oct 5 06:06:06 localhost journal[237931]: ethtool ioctl error on tapab5d2ba0-ac: No such device Oct 5 06:06:06 localhost journal[237931]: ethtool ioctl error on tapab5d2ba0-ac: No such device Oct 5 06:06:06 localhost journal[237931]: ethtool ioctl error on tapab5d2ba0-ac: No such device Oct 5 06:06:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e127 e127: 6 total, 6 up, 6 in Oct 5 06:06:06 localhost journal[237931]: ethtool ioctl error on tapab5d2ba0-ac: No such device Oct 5 06:06:06 localhost journal[237931]: ethtool ioctl error on tapab5d2ba0-ac: No such device Oct 5 06:06:06 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e127: 6 total, 6 up, 6 in Oct 5 06:06:06 localhost journal[237931]: ethtool ioctl error on tapab5d2ba0-ac: No such device Oct 5 06:06:06 localhost nova_compute[297021]: 
2025-10-05 10:06:06.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:06 localhost nova_compute[297021]: 2025-10-05 10:06:06.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:07 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:07.190 2 INFO neutron.agent.securitygroups_rpc [None req-40270d90-3d16-4270-bc86-50cb382d9d08 dd7c8ef99d0f41198e47651e3f745b5f b19cb2ed6df34a0dad27155d804f6680 - - default default] Security group member updated ['587ef845-3f12-4f64-8d07-19635386ce1f']#033[00m Oct 5 06:06:07 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:07.201 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:07 localhost podman[330870]: Oct 5 06:06:07 localhost podman[330870]: 2025-10-05 10:06:07.621870272 +0000 UTC m=+0.093763327 container create 3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03879930-5f5d-4a43-9ad6-d4af4e85c2a1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:07 localhost systemd[1]: Started libpod-conmon-3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca.scope. Oct 5 06:06:07 localhost podman[330870]: 2025-10-05 10:06:07.57984362 +0000 UTC m=+0.051736705 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:07 localhost systemd[1]: Started libcrun container. 
Oct 5 06:06:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9db03d08dcb4c307d3996083da2b9f71faee17833066697ebf8e054afc340b46/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:07 localhost podman[330870]: 2025-10-05 10:06:07.694900565 +0000 UTC m=+0.166793620 container init 3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03879930-5f5d-4a43-9ad6-d4af4e85c2a1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:06:07 localhost podman[330870]: 2025-10-05 10:06:07.70461621 +0000 UTC m=+0.176509275 container start 3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03879930-5f5d-4a43-9ad6-d4af4e85c2a1, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:06:07 localhost dnsmasq[330888]: started, version 2.85 cachesize 150 Oct 5 06:06:07 localhost dnsmasq[330888]: DNS service limited to local subnets Oct 5 06:06:07 localhost dnsmasq[330888]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:07 localhost dnsmasq[330888]: warning: no upstream servers configured Oct 
5 06:06:07 localhost dnsmasq-dhcp[330888]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:06:07 localhost dnsmasq[330888]: read /var/lib/neutron/dhcp/03879930-5f5d-4a43-9ad6-d4af4e85c2a1/addn_hosts - 0 addresses Oct 5 06:06:07 localhost dnsmasq-dhcp[330888]: read /var/lib/neutron/dhcp/03879930-5f5d-4a43-9ad6-d4af4e85c2a1/host Oct 5 06:06:07 localhost dnsmasq-dhcp[330888]: read /var/lib/neutron/dhcp/03879930-5f5d-4a43-9ad6-d4af4e85c2a1/opts Oct 5 06:06:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e127 do_prune osdmap full prune enabled Oct 5 06:06:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e128 e128: 6 total, 6 up, 6 in Oct 5 06:06:07 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e128: 6 total, 6 up, 6 in Oct 5 06:06:07 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:07.897 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:07 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:07.910 272040 INFO neutron.agent.dhcp.agent [None req-03bd9a43-8937-48e3-aa61-fe64729905d6 - - - - - -] DHCP configuration for ports {'c4677245-0b6f-4b1e-8ac6-dd5ae01907a9'} is completed#033[00m Oct 5 06:06:08 localhost dnsmasq[330888]: read /var/lib/neutron/dhcp/03879930-5f5d-4a43-9ad6-d4af4e85c2a1/addn_hosts - 0 addresses Oct 5 06:06:08 localhost dnsmasq-dhcp[330888]: read /var/lib/neutron/dhcp/03879930-5f5d-4a43-9ad6-d4af4e85c2a1/host Oct 5 06:06:08 localhost dnsmasq-dhcp[330888]: read /var/lib/neutron/dhcp/03879930-5f5d-4a43-9ad6-d4af4e85c2a1/opts Oct 5 06:06:08 localhost podman[330906]: 2025-10-05 10:06:08.06547817 +0000 UTC m=+0.058381847 container kill 3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03879930-5f5d-4a43-9ad6-d4af4e85c2a1, org.label-schema.vendor=CentOS, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:06:08 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:08.431 272040 INFO neutron.agent.dhcp.agent [None req-c739a52a-b218-4ce4-ae1c-eaa46decbeae - - - - - -] DHCP configuration for ports {'ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f', 'c4677245-0b6f-4b1e-8ac6-dd5ae01907a9'} is completed#033[00m Oct 5 06:06:09 localhost ovn_controller[157794]: 2025-10-05T10:06:09Z|00259|binding|INFO|Removing iface tapab5d2ba0-ac ovn-installed in OVS Oct 5 06:06:09 localhost ovn_controller[157794]: 2025-10-05T10:06:09Z|00260|binding|INFO|Removing lport ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f ovn-installed in OVS Oct 5 06:06:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:09.556 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 3c56ff2e-8c28-4d32-9778-ca7dd1b12b3e with type ""#033[00m Oct 5 06:06:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:09.558 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-03879930-5f5d-4a43-9ad6-d4af4e85c2a1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03879930-5f5d-4a43-9ad6-d4af4e85c2a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb53c0d3-a47e-4537-b7cb-020fa863c0bc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:09.560 163434 INFO neutron.agent.ovn.metadata.agent [-] Port ab5d2ba0-acb2-441a-a0da-8d5c837a0a1f in datapath 03879930-5f5d-4a43-9ad6-d4af4e85c2a1 unbound from our chassis#033[00m Oct 5 06:06:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:09.562 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03879930-5f5d-4a43-9ad6-d4af4e85c2a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:06:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:09.589 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b101b68c-f9bd-42d0-94be-d66a6069e7ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:09 localhost nova_compute[297021]: 2025-10-05 10:06:09.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:09 localhost nova_compute[297021]: 2025-10-05 10:06:09.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:06:09 localhost nova_compute[297021]: 2025-10-05 10:06:09.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:09 localhost nova_compute[297021]: 2025-10-05 10:06:09.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:09 localhost podman[330939]: 2025-10-05 10:06:09.699331281 +0000 UTC m=+0.087164328 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350) Oct 5 06:06:09 localhost systemd[1]: tmp-crun.ABiM6F.mount: Deactivated successfully. Oct 5 06:06:09 localhost podman[330956]: 2025-10-05 10:06:09.738227487 +0000 UTC m=+0.084717652 container kill 3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03879930-5f5d-4a43-9ad6-d4af4e85c2a1, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:06:09 localhost dnsmasq[330888]: exiting on receipt of SIGTERM Oct 5 06:06:09 localhost systemd[1]: libpod-3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca.scope: Deactivated successfully. 
Oct 5 06:06:09 localhost podman[330939]: 2025-10-05 10:06:09.763593216 +0000 UTC m=+0.151426263 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 06:06:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e128 do_prune osdmap full prune enabled Oct 5 06:06:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e129 e129: 6 total, 6 up, 6 in Oct 5 06:06:09 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:06:09 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e129: 6 total, 6 up, 6 in Oct 5 06:06:09 localhost podman[330978]: 2025-10-05 10:06:09.80830202 +0000 UTC m=+0.051055188 container died 3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03879930-5f5d-4a43-9ad6-d4af4e85c2a1, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:09 localhost podman[330978]: 2025-10-05 10:06:09.8480499 +0000 UTC m=+0.090803048 container remove 3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03879930-5f5d-4a43-9ad6-d4af4e85c2a1, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:06:09 localhost kernel: device tapab5d2ba0-ac left promiscuous mode Oct 5 06:06:09 localhost nova_compute[297021]: 2025-10-05 10:06:09.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:09.869 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, 
nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:09 localhost nova_compute[297021]: 2025-10-05 10:06:09.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:09 localhost systemd[1]: libpod-conmon-3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca.scope: Deactivated successfully. Oct 5 06:06:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:09.873 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:06:09 localhost nova_compute[297021]: 2025-10-05 10:06:09.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:09 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:09.910 272040 INFO neutron.agent.dhcp.agent [None req-ee108210-465c-4318-bfe8-f64120430308 - - - - - -] Synchronizing state#033[00m Oct 5 06:06:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:10.168 272040 INFO neutron.agent.dhcp.agent [None req-9e39f816-3380-446a-8763-e96889834bc2 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:06:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:10.171 272040 INFO neutron.agent.dhcp.agent [-] Starting network 03879930-5f5d-4a43-9ad6-d4af4e85c2a1 dhcp configuration#033[00m Oct 5 06:06:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:10.175 272040 INFO neutron.agent.dhcp.agent [-] Starting 
network d24358df-730a-4311-aeb2-20243a504d81 dhcp configuration#033[00m Oct 5 06:06:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:10.175 272040 INFO neutron.agent.dhcp.agent [-] Finished network d24358df-730a-4311-aeb2-20243a504d81 dhcp configuration#033[00m Oct 5 06:06:10 localhost nova_compute[297021]: 2025-10-05 10:06:10.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:10 localhost systemd[1]: var-lib-containers-storage-overlay-9db03d08dcb4c307d3996083da2b9f71faee17833066697ebf8e054afc340b46-merged.mount: Deactivated successfully. Oct 5 06:06:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b469008fd19f36cf5731e993bbf820e470b92529837c0aeed27e561cc7175ca-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:10 localhost systemd[1]: run-netns-qdhcp\x2d03879930\x2d5f5d\x2d4a43\x2d9ad6\x2dd4af4e85c2a1.mount: Deactivated successfully. 
Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.383 272040 INFO neutron.agent.dhcp.agent [None req-a971ab7a-9380-499f-8e21-813c4b713aa9 - - - - - -] Finished network 03879930-5f5d-4a43-9ad6-d4af4e85c2a1 dhcp configuration#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.384 272040 INFO neutron.agent.dhcp.agent [None req-9e39f816-3380-446a-8763-e96889834bc2 - - - - - -] Synchronizing state complete#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.385 272040 INFO neutron.agent.dhcp.agent [None req-9e39f816-3380-446a-8763-e96889834bc2 - - - - - -] Synchronizing state#033[00m Oct 5 06:06:11 localhost nova_compute[297021]: 2025-10-05 10:06:11.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:11 localhost nova_compute[297021]: 2025-10-05 10:06:11.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.569 272040 INFO neutron.agent.dhcp.agent [None req-6fecc327-b7f5-4279-b9f5-45081976bf4a - - - - - -] DHCP configuration for ports {'c4677245-0b6f-4b1e-8ac6-dd5ae01907a9'} is completed#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.604 272040 INFO neutron.agent.dhcp.agent [None req-afe8259e-1920-4408-b0e3-45af17cb225f - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.605 272040 INFO neutron.agent.dhcp.agent [-] Starting network 03879930-5f5d-4a43-9ad6-d4af4e85c2a1 dhcp configuration#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.606 272040 INFO neutron.agent.dhcp.agent [-] Finished network 03879930-5f5d-4a43-9ad6-d4af4e85c2a1 dhcp configuration#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.606 272040 INFO neutron.agent.dhcp.agent [-] Starting network d24358df-730a-4311-aeb2-20243a504d81 dhcp configuration#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.607 272040 INFO neutron.agent.dhcp.agent [-] Finished network d24358df-730a-4311-aeb2-20243a504d81 dhcp configuration#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.608 272040 INFO neutron.agent.dhcp.agent [None req-afe8259e-1920-4408-b0e3-45af17cb225f - - - - - -] Synchronizing state complete#033[00m Oct 5 06:06:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e129 do_prune osdmap full prune enabled Oct 5 06:06:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e130 e130: 6 total, 6 up, 
6 in Oct 5 06:06:11 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e130: 6 total, 6 up, 6 in Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.701 272040 INFO neutron.agent.dhcp.agent [None req-d45a4e92-8f71-45f3-8d89-40f4a141b12d - - - - - -] DHCP configuration for ports {'c4677245-0b6f-4b1e-8ac6-dd5ae01907a9'} is completed#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:11.903 272040 INFO neutron.agent.dhcp.agent [None req-5e50803a-2bb5-4fec-9d4e-27f949cb7143 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:12 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:12.340 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:12 localhost nova_compute[297021]: 2025-10-05 10:06:12.423 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:13 localhost ovn_controller[157794]: 2025-10-05T10:06:13Z|00261|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:06:13 localhost nova_compute[297021]: 2025-10-05 10:06:13.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:06:13 localhost podman[331005]: 2025-10-05 10:06:13.66577324 +0000 UTC m=+0.077175258 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:06:13 localhost podman[331005]: 2025-10-05 10:06:13.702902338 +0000 UTC m=+0.114304356 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:06:13 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:06:13 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:13.842 2 INFO neutron.agent.securitygroups_rpc [None req-a1540dd6-4210-49ff-b017-69a28b18671d fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:06:13 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:13.874 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:06:13 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:13.874 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:14 localhost nova_compute[297021]: 2025-10-05 10:06:14.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:14 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:14.713 2 INFO neutron.agent.securitygroups_rpc [None req-c02d47ac-3ccc-4c3b-9efe-38cf6a9d2160 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:06:14 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:14.744 272040 
INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:14 localhost nova_compute[297021]: 2025-10-05 10:06:14.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:14 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:14.919 2 INFO neutron.agent.securitygroups_rpc [None req-f53d150f-ff34-4d45-b035-cd4bf1baabd9 a4fad2a194fa4a66911e27f722075fa7 27e03170fdbf44268868a90d25e4e944 - - default default] Security group member updated ['14f00663-08a7-497a-b752-895d5ab0d915']#033[00m Oct 5 06:06:15 localhost nova_compute[297021]: 2025-10-05 10:06:15.418 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e130 do_prune osdmap full prune enabled Oct 5 06:06:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e131 e131: 6 total, 6 up, 6 in Oct 5 06:06:15 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e131: 6 total, 6 up, 6 in Oct 5 06:06:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e131 do_prune osdmap full prune enabled Oct 5 06:06:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e132 e132: 6 total, 6 up, 6 in Oct 5 06:06:16 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e132: 6 total, 6 up, 6 in Oct 5 06:06:16 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:16.966 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 
06:06:17 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:17.735 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e132 do_prune osdmap full prune enabled Oct 5 06:06:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e133 e133: 6 total, 6 up, 6 in Oct 5 06:06:17 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e133: 6 total, 6 up, 6 in Oct 5 06:06:18 localhost nova_compute[297021]: 2025-10-05 10:06:18.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:18 localhost nova_compute[297021]: 2025-10-05 10:06:18.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:06:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e133 do_prune osdmap full prune enabled Oct 5 06:06:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e134 e134: 6 total, 6 up, 6 in Oct 5 06:06:18 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e134: 6 total, 6 up, 6 in Oct 5 06:06:18 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:18.885 272040 INFO neutron.agent.linux.ip_lib [None req-1bce272d-09cb-4b82-912c-8eacfb76361a - - - - - -] Device tap54c5a879-ac cannot be used as it has no MAC address#033[00m Oct 5 06:06:18 localhost systemd[1]: tmp-crun.yk5Ntv.mount: Deactivated successfully. 
Oct 5 06:06:18 localhost podman[331031]: 2025-10-05 10:06:18.912080066 +0000 UTC m=+0.098318151 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:06:18 localhost nova_compute[297021]: 2025-10-05 10:06:18.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:18 localhost kernel: device tap54c5a879-ac entered promiscuous mode Oct 5 06:06:18 localhost ovn_controller[157794]: 2025-10-05T10:06:18Z|00262|binding|INFO|Claiming lport 54c5a879-ac82-44f2-887a-fd694667f3c0 for this chassis. 
Oct 5 06:06:18 localhost nova_compute[297021]: 2025-10-05 10:06:18.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:18 localhost ovn_controller[157794]: 2025-10-05T10:06:18Z|00263|binding|INFO|54c5a879-ac82-44f2-887a-fd694667f3c0: Claiming unknown Oct 5 06:06:18 localhost NetworkManager[5981]: [1759658778.9333] manager: (tap54c5a879-ac): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Oct 5 06:06:18 localhost systemd-udevd[331060]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:06:18 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:18.944 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-aa71e5bf-77bd-4316-8de6-9ed8702d644d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa71e5bf-77bd-4316-8de6-9ed8702d644d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53967414-aa48-4af9-8b1a-6ce93b5d8977, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=54c5a879-ac82-44f2-887a-fd694667f3c0) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:18 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:18.946 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 54c5a879-ac82-44f2-887a-fd694667f3c0 in datapath aa71e5bf-77bd-4316-8de6-9ed8702d644d bound to our chassis#033[00m Oct 5 06:06:18 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:18.948 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network aa71e5bf-77bd-4316-8de6-9ed8702d644d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:18 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:18.949 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[252f3adc-281e-4389-beba-08d42ec0b1f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:18 localhost podman[331031]: 2025-10-05 10:06:18.952014311 +0000 UTC m=+0.138252376 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', 
'--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:06:18 localhost journal[237931]: ethtool ioctl error on tap54c5a879-ac: No such device Oct 5 06:06:18 localhost nova_compute[297021]: 2025-10-05 10:06:18.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:18 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:06:18 localhost journal[237931]: ethtool ioctl error on tap54c5a879-ac: No such device Oct 5 06:06:18 localhost ovn_controller[157794]: 2025-10-05T10:06:18Z|00264|binding|INFO|Setting lport 54c5a879-ac82-44f2-887a-fd694667f3c0 ovn-installed in OVS Oct 5 06:06:18 localhost ovn_controller[157794]: 2025-10-05T10:06:18Z|00265|binding|INFO|Setting lport 54c5a879-ac82-44f2-887a-fd694667f3c0 up in Southbound Oct 5 06:06:18 localhost nova_compute[297021]: 2025-10-05 10:06:18.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:18 localhost nova_compute[297021]: 2025-10-05 10:06:18.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:18 localhost journal[237931]: ethtool ioctl error on tap54c5a879-ac: No such device Oct 5 06:06:18 localhost journal[237931]: ethtool ioctl error on tap54c5a879-ac: No such device Oct 5 06:06:18 localhost journal[237931]: ethtool 
ioctl error on tap54c5a879-ac: No such device Oct 5 06:06:18 localhost journal[237931]: ethtool ioctl error on tap54c5a879-ac: No such device Oct 5 06:06:19 localhost journal[237931]: ethtool ioctl error on tap54c5a879-ac: No such device Oct 5 06:06:19 localhost journal[237931]: ethtool ioctl error on tap54c5a879-ac: No such device Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:06:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2894921636' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:06:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:06:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2894921636' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:19 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:19.443 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b0dc9034-4cc3-4542-ac99-b70749becaa5 with type ""#033[00m Oct 5 06:06:19 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:19.444 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-aa71e5bf-77bd-4316-8de6-9ed8702d644d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa71e5bf-77bd-4316-8de6-9ed8702d644d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=53967414-aa48-4af9-8b1a-6ce93b5d8977, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=54c5a879-ac82-44f2-887a-fd694667f3c0) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:19 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:19.446 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 54c5a879-ac82-44f2-887a-fd694667f3c0 in datapath aa71e5bf-77bd-4316-8de6-9ed8702d644d unbound from our chassis#033[00m Oct 5 06:06:19 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:19.447 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network aa71e5bf-77bd-4316-8de6-9ed8702d644d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:19 localhost ovn_controller[157794]: 2025-10-05T10:06:19Z|00266|binding|INFO|Removing iface tap54c5a879-ac ovn-installed in OVS Oct 5 06:06:19 localhost ovn_controller[157794]: 2025-10-05T10:06:19Z|00267|binding|INFO|Removing lport 54c5a879-ac82-44f2-887a-fd694667f3c0 ovn-installed in OVS Oct 5 06:06:19 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:19.449 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3011a51a-1cff-4b32-a00b-872c5c2693a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.451 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.451 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.451 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.452 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.452 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:19 localhost podman[331153]: Oct 5 06:06:19 localhost podman[331153]: 2025-10-05 10:06:19.843888852 +0000 UTC m=+0.096991626 container create e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:06:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2776555206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:06:19 localhost systemd[1]: Started libpod-conmon-e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb.scope. Oct 5 06:06:19 localhost podman[331153]: 2025-10-05 10:06:19.799062334 +0000 UTC m=+0.052165138 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.900 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:06:19 localhost systemd[1]: Started libcrun container. 
Oct 5 06:06:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66df7e3e19f740c045e3fe96c623a1120557c760fb7c21844be5c7afa6b599e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:19 localhost podman[331153]: 2025-10-05 10:06:19.918974861 +0000 UTC m=+0.172077625 container init e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:19 localhost podman[331153]: 2025-10-05 10:06:19.926630079 +0000 UTC m=+0.179732853 container start e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:06:19 localhost dnsmasq[331173]: started, version 2.85 cachesize 150 Oct 5 06:06:19 localhost dnsmasq[331173]: DNS service limited to local subnets Oct 5 06:06:19 localhost dnsmasq[331173]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:19 localhost dnsmasq[331173]: warning: no upstream servers configured Oct 
5 06:06:19 localhost dnsmasq-dhcp[331173]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:06:19 localhost dnsmasq[331173]: read /var/lib/neutron/dhcp/aa71e5bf-77bd-4316-8de6-9ed8702d644d/addn_hosts - 0 addresses Oct 5 06:06:19 localhost dnsmasq-dhcp[331173]: read /var/lib/neutron/dhcp/aa71e5bf-77bd-4316-8de6-9ed8702d644d/host Oct 5 06:06:19 localhost dnsmasq-dhcp[331173]: read /var/lib/neutron/dhcp/aa71e5bf-77bd-4316-8de6-9ed8702d644d/opts Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.962 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:06:19 localhost nova_compute[297021]: 2025-10-05 10:06:19.963 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:20 localhost kernel: device tap54c5a879-ac left promiscuous mode Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.109 272040 INFO neutron.agent.dhcp.agent [None req-fe0584b4-b9c3-47e6-b58a-c9f7b4cc84a7 - - - - - -] DHCP configuration for ports {'b8dc302e-8af6-4e13-93e5-106fd45e628f'} is completed#033[00m Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.194 2 WARNING nova.virt.libvirt.driver 
[None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.196 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11160MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.196 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.197 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:06:20 localhost dnsmasq[331173]: read /var/lib/neutron/dhcp/aa71e5bf-77bd-4316-8de6-9ed8702d644d/addn_hosts - 0 addresses Oct 5 06:06:20 localhost dnsmasq-dhcp[331173]: read /var/lib/neutron/dhcp/aa71e5bf-77bd-4316-8de6-9ed8702d644d/host Oct 5 06:06:20 localhost dnsmasq-dhcp[331173]: read /var/lib/neutron/dhcp/aa71e5bf-77bd-4316-8de6-9ed8702d644d/opts Oct 5 06:06:20 localhost podman[331193]: 2025-10-05 10:06:20.258008398 +0000 UTC m=+0.061304696 container kill e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d, org.label-schema.vendor=CentOS, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.275 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.275 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.276 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent [None req-1bce272d-09cb-4b82-912c-8eacfb76361a - - - - - -] Unable to reload_allocations dhcp for aa71e5bf-77bd-4316-8de6-9ed8702d644d.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap54c5a879-ac not found in namespace qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d. 
Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Oct 5 06:06:20 localhost 
neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent return fut.result() Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent return self.__get_result() Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent raise self._exception Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR 
neutron.agent.dhcp.agent raise exc_type(*result[2]) Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap54c5a879-ac not found in namespace qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d. Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.280 272040 ERROR neutron.agent.dhcp.agent #033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.286 272040 INFO neutron.agent.dhcp.agent [None req-afe8259e-1920-4408-b0e3-45af17cb225f - - - - - -] Synchronizing state#033[00m Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.315 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.455 272040 INFO neutron.agent.dhcp.agent [None req-07c1ae0e-38ae-4e8e-b3ba-75b8002a0663 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.456 272040 INFO neutron.agent.dhcp.agent [-] Starting network aa71e5bf-77bd-4316-8de6-9ed8702d644d dhcp configuration#033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.456 272040 INFO neutron.agent.dhcp.agent [-] Finished network aa71e5bf-77bd-4316-8de6-9ed8702d644d dhcp configuration#033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.457 272040 INFO neutron.agent.dhcp.agent [-] Starting network d24358df-730a-4311-aeb2-20243a504d81 dhcp configuration#033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.457 272040 INFO neutron.agent.dhcp.agent [-] Finished network 
d24358df-730a-4311-aeb2-20243a504d81 dhcp configuration#033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.458 272040 INFO neutron.agent.dhcp.agent [None req-07c1ae0e-38ae-4e8e-b3ba-75b8002a0663 - - - - - -] Synchronizing state complete#033[00m Oct 5 06:06:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:20.468 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:06:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:20.469 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:06:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:20.469 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:06:20 localhost dnsmasq[331173]: exiting on receipt of SIGTERM Oct 5 06:06:20 localhost podman[331246]: 2025-10-05 10:06:20.77797516 +0000 UTC m=+0.062787037 container kill e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator 
team) Oct 5 06:06:20 localhost systemd[1]: libpod-e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb.scope: Deactivated successfully. Oct 5 06:06:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:06:20 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/818396030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.835 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:06:20 localhost podman[331259]: 2025-10-05 10:06:20.845806242 +0000 UTC m=+0.052674222 container died e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.847 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:06:20 localhost systemd[1]: tmp-crun.Yl9Enb.mount: Deactivated successfully. 
Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.866 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:06:20 localhost podman[331259]: 2025-10-05 10:06:20.885012416 +0000 UTC m=+0.091880356 container cleanup e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:06:20 localhost systemd[1]: libpod-conmon-e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb.scope: Deactivated successfully. 
Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.895 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.896 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:06:20 localhost ovn_controller[157794]: 2025-10-05T10:06:20Z|00268|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:06:20 localhost podman[331261]: 2025-10-05 10:06:20.92605532 +0000 UTC m=+0.127445011 container remove e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aa71e5bf-77bd-4316-8de6-9ed8702d644d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 06:06:20 localhost nova_compute[297021]: 2025-10-05 10:06:20.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:20.998 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:21 localhost 
neutron_sriov_agent[264984]: 2025-10-05 10:06:21.388 2 INFO neutron.agent.securitygroups_rpc [None req-ac8f0eac-2a70-457c-8a61-6ee9d997209b b817219f01e3454e8694e283e92fc44c ea1a94bd61a440f3957671694183ce08 - - default default] Security group member updated ['d6c25099-34f5-417b-b95b-a4264a8e3587']#033[00m Oct 5 06:06:21 localhost podman[248506]: time="2025-10-05T10:06:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:06:21 localhost podman[248506]: @ - - [05/Oct/2025:10:06:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147495 "" "Go-http-client/1.1" Oct 5 06:06:21 localhost podman[248506]: @ - - [05/Oct/2025:10:06:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19854 "" "Go-http-client/1.1" Oct 5 06:06:21 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:21.572 2 INFO neutron.agent.securitygroups_rpc [None req-4cb6cc17-e2d9-4ef7-b7c8-bf598d55a89a a4fad2a194fa4a66911e27f722075fa7 27e03170fdbf44268868a90d25e4e944 - - default default] Security group member updated ['14f00663-08a7-497a-b752-895d5ab0d915']#033[00m Oct 5 06:06:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:21 localhost systemd[1]: var-lib-containers-storage-overlay-66df7e3e19f740c045e3fe96c623a1120557c760fb7c21844be5c7afa6b599e6-merged.mount: Deactivated successfully. Oct 5 06:06:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e31f77f39b26b235519dab86dfb2821cf5cd1e325c44c08a25110f77161659bb-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:21 localhost systemd[1]: run-netns-qdhcp\x2daa71e5bf\x2d77bd\x2d4316\x2d8de6\x2d9ed8702d644d.mount: Deactivated successfully. 
Oct 5 06:06:21 localhost nova_compute[297021]: 2025-10-05 10:06:21.897 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:21 localhost nova_compute[297021]: 2025-10-05 10:06:21.898 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:06:21 localhost nova_compute[297021]: 2025-10-05 10:06:21.898 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:06:21 localhost nova_compute[297021]: 2025-10-05 10:06:21.986 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:06:21 localhost nova_compute[297021]: 2025-10-05 10:06:21.987 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:06:21 localhost nova_compute[297021]: 2025-10-05 10:06:21.988 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:06:21 localhost nova_compute[297021]: 2025-10-05 10:06:21.988 2 DEBUG nova.objects.instance [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:06:22 localhost openstack_network_exporter[250601]: ERROR 10:06:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:06:22 localhost openstack_network_exporter[250601]: ERROR 10:06:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:06:22 localhost openstack_network_exporter[250601]: ERROR 10:06:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:06:22 localhost openstack_network_exporter[250601]: ERROR 10:06:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:06:22 localhost openstack_network_exporter[250601]: Oct 5 06:06:22 localhost openstack_network_exporter[250601]: ERROR 10:06:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:06:22 localhost openstack_network_exporter[250601]: Oct 5 06:06:23 localhost nova_compute[297021]: 2025-10-05 10:06:23.003 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, 
"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:06:23 localhost nova_compute[297021]: 2025-10-05 10:06:23.039 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:06:23 localhost nova_compute[297021]: 2025-10-05 10:06:23.040 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:06:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:06:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 06:06:23 localhost podman[331292]: 2025-10-05 10:06:23.685813269 +0000 UTC m=+0.087552969 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:06:23 localhost podman[331292]: 2025-10-05 10:06:23.694977598 +0000 UTC m=+0.096717318 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:23 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:06:23 localhost podman[331291]: 2025-10-05 10:06:23.747558096 +0000 UTC m=+0.152992066 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 06:06:23 localhost podman[331291]: 2025-10-05 10:06:23.758808582 +0000 UTC m=+0.164242562 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true) Oct 5 06:06:23 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:06:24 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:24.144 2 INFO neutron.agent.securitygroups_rpc [None req-42adb9bd-1c44-4c47-bb81-feb4a013376d b817219f01e3454e8694e283e92fc44c ea1a94bd61a440f3957671694183ce08 - - default default] Security group member updated ['d6c25099-34f5-417b-b95b-a4264a8e3587']#033[00m Oct 5 06:06:24 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:24.205 272040 INFO neutron.agent.linux.ip_lib [None req-d77fcec9-f08d-48a6-b39e-e0e063687240 - - - - - -] Device tapa810c5ec-1c cannot be used as it has no MAC address#033[00m Oct 5 06:06:24 localhost nova_compute[297021]: 2025-10-05 10:06:24.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:24 localhost kernel: device tapa810c5ec-1c entered promiscuous mode Oct 5 06:06:24 localhost ovn_controller[157794]: 2025-10-05T10:06:24Z|00269|binding|INFO|Claiming lport a810c5ec-1ce9-4b80-97f3-d6906bc7a96b for this chassis. Oct 5 06:06:24 localhost ovn_controller[157794]: 2025-10-05T10:06:24Z|00270|binding|INFO|a810c5ec-1ce9-4b80-97f3-d6906bc7a96b: Claiming unknown Oct 5 06:06:24 localhost nova_compute[297021]: 2025-10-05 10:06:24.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:24 localhost NetworkManager[5981]: [1759658784.2405] manager: (tapa810c5ec-1c): new Generic device (/org/freedesktop/NetworkManager/Devices/45) Oct 5 06:06:24 localhost systemd-udevd[331338]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:06:24 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:24.246 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-0c19f064-847f-4189-9d79-0a336cef79ee', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c19f064-847f-4189-9d79-0a336cef79ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c41b965-d946-4362-aad9-d37b6b35a905, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a810c5ec-1ce9-4b80-97f3-d6906bc7a96b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:24 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:24.248 163434 INFO neutron.agent.ovn.metadata.agent [-] Port a810c5ec-1ce9-4b80-97f3-d6906bc7a96b in datapath 0c19f064-847f-4189-9d79-0a336cef79ee bound to our chassis#033[00m Oct 5 06:06:24 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:24.249 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0c19f064-847f-4189-9d79-0a336cef79ee or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:24 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:24.251 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[441c2f0f-3df5-4ea9-b325-42236c9ae47f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:24 localhost journal[237931]: ethtool ioctl error on tapa810c5ec-1c: No such device Oct 5 06:06:24 localhost ovn_controller[157794]: 2025-10-05T10:06:24Z|00271|binding|INFO|Setting lport a810c5ec-1ce9-4b80-97f3-d6906bc7a96b ovn-installed in OVS Oct 5 06:06:24 localhost ovn_controller[157794]: 2025-10-05T10:06:24Z|00272|binding|INFO|Setting lport a810c5ec-1ce9-4b80-97f3-d6906bc7a96b up in Southbound Oct 5 06:06:24 localhost journal[237931]: ethtool ioctl error on tapa810c5ec-1c: No such device Oct 5 06:06:24 localhost nova_compute[297021]: 2025-10-05 10:06:24.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:24 localhost journal[237931]: ethtool ioctl error on tapa810c5ec-1c: No such device Oct 5 06:06:24 localhost journal[237931]: ethtool ioctl error on tapa810c5ec-1c: No such device Oct 5 06:06:24 localhost journal[237931]: ethtool ioctl error on tapa810c5ec-1c: No such device Oct 5 06:06:24 localhost journal[237931]: ethtool ioctl error on tapa810c5ec-1c: No such device Oct 5 06:06:24 localhost journal[237931]: ethtool ioctl error on tapa810c5ec-1c: No such device Oct 5 06:06:24 localhost journal[237931]: ethtool ioctl error on tapa810c5ec-1c: No such device Oct 5 06:06:24 localhost nova_compute[297021]: 2025-10-05 10:06:24.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:24 localhost nova_compute[297021]: 2025-10-05 10:06:24.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e134 do_prune osdmap full prune enabled Oct 5 06:06:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e135 e135: 6 total, 6 up, 6 in Oct 5 06:06:24 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e135: 6 total, 6 up, 6 in Oct 5 06:06:24 localhost nova_compute[297021]: 2025-10-05 10:06:24.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:24 localhost nova_compute[297021]: 2025-10-05 10:06:24.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:25 localhost ovn_controller[157794]: 2025-10-05T10:06:25Z|00273|binding|INFO|Removing iface tapa810c5ec-1c ovn-installed in OVS Oct 5 06:06:25 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:25.041 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port c773bdb3-7f11-45c8-b9af-24853ad4228b with type ""#033[00m Oct 5 06:06:25 localhost ovn_controller[157794]: 2025-10-05T10:06:25Z|00274|binding|INFO|Removing lport a810c5ec-1ce9-4b80-97f3-d6906bc7a96b ovn-installed in OVS Oct 5 06:06:25 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:25.043 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-0c19f064-847f-4189-9d79-0a336cef79ee', 'neutron:device_owner': 
'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c19f064-847f-4189-9d79-0a336cef79ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c41b965-d946-4362-aad9-d37b6b35a905, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a810c5ec-1ce9-4b80-97f3-d6906bc7a96b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:25 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:25.046 163434 INFO neutron.agent.ovn.metadata.agent [-] Port a810c5ec-1ce9-4b80-97f3-d6906bc7a96b in datapath 0c19f064-847f-4189-9d79-0a336cef79ee unbound from our chassis#033[00m Oct 5 06:06:25 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:25.048 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c19f064-847f-4189-9d79-0a336cef79ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:06:25 localhost nova_compute[297021]: 2025-10-05 10:06:25.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:25 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:25.049 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3a759825-3558-489a-90d7-281d8de4e4bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:25 localhost nova_compute[297021]: 2025-10-05 10:06:25.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Oct 5 06:06:25 localhost podman[331409]: Oct 5 06:06:25 localhost podman[331409]: 2025-10-05 10:06:25.143426394 +0000 UTC m=+0.073682082 container create 55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c19f064-847f-4189-9d79-0a336cef79ee, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:06:25 localhost systemd[1]: Started libpod-conmon-55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733.scope. Oct 5 06:06:25 localhost systemd[1]: Started libcrun container. Oct 5 06:06:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db37055525690b420aee0e85f4cf20db83234b1802f4e0faa5931bfe70ca2ce9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:25 localhost podman[331409]: 2025-10-05 10:06:25.119602617 +0000 UTC m=+0.049858285 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:25 localhost podman[331409]: 2025-10-05 10:06:25.228985167 +0000 UTC m=+0.159240905 container init 55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c19f064-847f-4189-9d79-0a336cef79ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:25 
localhost podman[331409]: 2025-10-05 10:06:25.243571324 +0000 UTC m=+0.173826992 container start 55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c19f064-847f-4189-9d79-0a336cef79ee, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Oct 5 06:06:25 localhost dnsmasq[331427]: started, version 2.85 cachesize 150 Oct 5 06:06:25 localhost dnsmasq[331427]: DNS service limited to local subnets Oct 5 06:06:25 localhost dnsmasq[331427]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:25 localhost dnsmasq[331427]: warning: no upstream servers configured Oct 5 06:06:25 localhost dnsmasq-dhcp[331427]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:06:25 localhost dnsmasq[331427]: read /var/lib/neutron/dhcp/0c19f064-847f-4189-9d79-0a336cef79ee/addn_hosts - 0 addresses Oct 5 06:06:25 localhost dnsmasq-dhcp[331427]: read /var/lib/neutron/dhcp/0c19f064-847f-4189-9d79-0a336cef79ee/host Oct 5 06:06:25 localhost dnsmasq-dhcp[331427]: read /var/lib/neutron/dhcp/0c19f064-847f-4189-9d79-0a336cef79ee/opts Oct 5 06:06:25 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:25.359 272040 INFO neutron.agent.dhcp.agent [None req-ed21dcbc-ec6f-48dc-8962-a58374111222 - - - - - -] DHCP configuration for ports {'fbe929b2-6c0c-490d-9c1c-eb4fdea7bf7a'} is completed#033[00m Oct 5 06:06:25 localhost dnsmasq[331427]: read /var/lib/neutron/dhcp/0c19f064-847f-4189-9d79-0a336cef79ee/addn_hosts - 0 addresses Oct 5 06:06:25 localhost dnsmasq-dhcp[331427]: read 
/var/lib/neutron/dhcp/0c19f064-847f-4189-9d79-0a336cef79ee/host Oct 5 06:06:25 localhost dnsmasq-dhcp[331427]: read /var/lib/neutron/dhcp/0c19f064-847f-4189-9d79-0a336cef79ee/opts Oct 5 06:06:25 localhost podman[331443]: 2025-10-05 10:06:25.524935485 +0000 UTC m=+0.060164975 container kill 55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c19f064-847f-4189-9d79-0a336cef79ee, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:06:25 localhost ovn_controller[157794]: 2025-10-05T10:06:25Z|00275|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:06:25 localhost nova_compute[297021]: 2025-10-05 10:06:25.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e135 do_prune osdmap full prune enabled Oct 5 06:06:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e136 e136: 6 total, 6 up, 6 in Oct 5 06:06:25 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e136: 6 total, 6 up, 6 in Oct 5 06:06:25 localhost dnsmasq[331427]: exiting on receipt of SIGTERM Oct 5 06:06:25 localhost podman[331478]: 2025-10-05 10:06:25.914546206 +0000 UTC m=+0.058265264 container kill 55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c19f064-847f-4189-9d79-0a336cef79ee, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:06:25 localhost systemd[1]: libpod-55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733.scope: Deactivated successfully. Oct 5 06:06:25 localhost podman[331491]: 2025-10-05 10:06:25.995522085 +0000 UTC m=+0.065852140 container died 55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c19f064-847f-4189-9d79-0a336cef79ee, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:26 localhost podman[331491]: 2025-10-05 10:06:26.029920829 +0000 UTC m=+0.100250844 container cleanup 55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c19f064-847f-4189-9d79-0a336cef79ee, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:06:26 localhost systemd[1]: libpod-conmon-55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733.scope: Deactivated successfully. 
Oct 5 06:06:26 localhost podman[331493]: 2025-10-05 10:06:26.075000793 +0000 UTC m=+0.137809133 container remove 55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c19f064-847f-4189-9d79-0a336cef79ee, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:06:26 localhost nova_compute[297021]: 2025-10-05 10:06:26.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:26 localhost kernel: device tapa810c5ec-1c left promiscuous mode Oct 5 06:06:26 localhost nova_compute[297021]: 2025-10-05 10:06:26.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:26 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:26.132 272040 INFO neutron.agent.dhcp.agent [None req-a34cd026-3a8a-4a08-b0ad-30e62b3d106b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:26 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:26.133 272040 INFO neutron.agent.dhcp.agent [None req-a34cd026-3a8a-4a08-b0ad-30e62b3d106b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:26 localhost systemd[1]: var-lib-containers-storage-overlay-db37055525690b420aee0e85f4cf20db83234b1802f4e0faa5931bfe70ca2ce9-merged.mount: Deactivated successfully. 
Oct 5 06:06:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55a2a8bbadb532c89f617b3c9c0751c24d97c1e1598274bba30084ba45770733-userdata-shm.mount: Deactivated successfully.
Oct 5 06:06:26 localhost systemd[1]: run-netns-qdhcp\x2d0c19f064\x2d847f\x2d4189\x2d9d79\x2d0a336cef79ee.mount: Deactivated successfully.
Oct 5 06:06:26 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:26.432 2 INFO neutron.agent.securitygroups_rpc [None req-5318ebe2-2095-4bac-986a-b46f6b98b717 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m
Oct 5 06:06:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:06:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 5 06:06:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e137 e137: 6 total, 6 up, 6 in
Oct 5 06:06:26 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e137: 6 total, 6 up, 6 in
Oct 5 06:06:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 06:06:27 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:06:27 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:27.722 272040 INFO neutron.agent.linux.ip_lib [None req-d3ccf8ea-9e9c-4d0e-b7f9-608ba6a065d1 - - - - - -] Device tapfb1d0bf2-33 cannot be used as it has no MAC address#033[00m
Oct 5 06:06:27 localhost nova_compute[297021]: 2025-10-05 10:06:27.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:27 localhost kernel: device tapfb1d0bf2-33 entered promiscuous mode
Oct 5 06:06:27 localhost ovn_controller[157794]: 2025-10-05T10:06:27Z|00276|binding|INFO|Claiming lport fb1d0bf2-336a-4281-a08e-0042c1ee64d1 for this chassis.
Oct 5 06:06:27 localhost ovn_controller[157794]: 2025-10-05T10:06:27Z|00277|binding|INFO|fb1d0bf2-336a-4281-a08e-0042c1ee64d1: Claiming unknown
Oct 5 06:06:27 localhost nova_compute[297021]: 2025-10-05 10:06:27.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:27 localhost NetworkManager[5981]: [1759658787.7587] manager: (tapfb1d0bf2-33): new Generic device (/org/freedesktop/NetworkManager/Devices/46)
Oct 5 06:06:27 localhost systemd-udevd[331619]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:06:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:27.767 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-3e41d61a-8e91-4a22-b3b2-96f0f656d396', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e41d61a-8e91-4a22-b3b2-96f0f656d396', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac9f35fe-f6f7-41a6-9126-781e0164cc70, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fb1d0bf2-336a-4281-a08e-0042c1ee64d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:06:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:27.769 163434 INFO neutron.agent.ovn.metadata.agent [-] Port fb1d0bf2-336a-4281-a08e-0042c1ee64d1 in datapath 3e41d61a-8e91-4a22-b3b2-96f0f656d396 bound to our chassis#033[00m
Oct 5 06:06:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:27.771 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3e41d61a-8e91-4a22-b3b2-96f0f656d396 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 5 06:06:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:27.772 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[e45bbf07-ae50-43f4-a267-588d9a7bdcf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:06:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 5 06:06:27 localhost journal[237931]: ethtool ioctl error on tapfb1d0bf2-33: No such device
Oct 5 06:06:27 localhost journal[237931]: ethtool ioctl error on tapfb1d0bf2-33: No such device
Oct 5 06:06:27 localhost ovn_controller[157794]: 2025-10-05T10:06:27Z|00278|binding|INFO|Setting lport fb1d0bf2-336a-4281-a08e-0042c1ee64d1 ovn-installed in OVS
Oct 5 06:06:27 localhost ovn_controller[157794]: 2025-10-05T10:06:27Z|00279|binding|INFO|Setting lport fb1d0bf2-336a-4281-a08e-0042c1ee64d1 up in Southbound
Oct 5 06:06:27 localhost nova_compute[297021]: 2025-10-05 10:06:27.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:27 localhost journal[237931]: ethtool ioctl error on tapfb1d0bf2-33: No such device
Oct 5 06:06:27 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:06:27 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:06:27 localhost nova_compute[297021]: 2025-10-05 10:06:27.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:27 localhost journal[237931]: ethtool ioctl error on tapfb1d0bf2-33: No such device
Oct 5 06:06:27 localhost journal[237931]: ethtool ioctl error on tapfb1d0bf2-33: No such device
Oct 5 06:06:27 localhost journal[237931]: ethtool ioctl error on tapfb1d0bf2-33: No such device
Oct 5 06:06:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e138 e138: 6 total, 6 up, 6 in
Oct 5 06:06:27 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e138: 6 total, 6 up, 6 in
Oct 5 06:06:27 localhost journal[237931]: ethtool ioctl error on tapfb1d0bf2-33: No such device
Oct 5 06:06:27 localhost journal[237931]: ethtool ioctl error on tapfb1d0bf2-33: No such device
Oct 5 06:06:27 localhost nova_compute[297021]: 2025-10-05 10:06:27.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:27 localhost nova_compute[297021]: 2025-10-05 10:06:27.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:28 localhost podman[331690]:
Oct 5 06:06:28 localhost podman[331690]: 2025-10-05 10:06:28.779258395 +0000 UTC m=+0.089994045 container create e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 5 06:06:28 localhost systemd[1]: Started libpod-conmon-e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e.scope.
Oct 5 06:06:28 localhost podman[331690]: 2025-10-05 10:06:28.735672561 +0000 UTC m=+0.046408201 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:06:28 localhost systemd[1]: tmp-crun.uwCWM5.mount: Deactivated successfully.
Oct 5 06:06:28 localhost systemd[1]: Started libcrun container.
Oct 5 06:06:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eced5058f876ccbac284e2d251db44e363e60ce9c31c9d193fa29717f1336b49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:06:28 localhost podman[331690]: 2025-10-05 10:06:28.873260017 +0000 UTC m=+0.183995627 container init e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 5 06:06:28 localhost podman[331690]: 2025-10-05 10:06:28.881988394 +0000 UTC m=+0.192724014 container start e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 5 06:06:28 localhost dnsmasq[331709]: started, version 2.85 cachesize 150
Oct 5 06:06:28 localhost dnsmasq[331709]: DNS service limited to local subnets
Oct 5 06:06:28 localhost dnsmasq[331709]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:06:28 localhost dnsmasq[331709]: warning: no upstream servers configured
Oct 5 06:06:28 localhost dnsmasq-dhcp[331709]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:06:28 localhost dnsmasq[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/addn_hosts - 0 addresses
Oct 5 06:06:28 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/host
Oct 5 06:06:28 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/opts
Oct 5 06:06:29 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:29.137 272040 INFO neutron.agent.dhcp.agent [None req-7ebe29cc-fb4f-4f99-9b76-81044b98c73f - - - - - -] DHCP configuration for ports {'ed7c4069-bf27-43b9-9958-0fedf711c4b7'} is completed#033[00m
Oct 5 06:06:29 localhost nova_compute[297021]: 2025-10-05 10:06:29.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:29 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:29.874 2 INFO neutron.agent.securitygroups_rpc [None req-e54b65f2-7173-4862-a9ae-d94123481585 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m
Oct 5 06:06:30 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:30.089 2 INFO neutron.agent.securitygroups_rpc [None req-e54b65f2-7173-4862-a9ae-d94123481585 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m
Oct 5 06:06:30 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:30.559 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:30Z, description=, device_id=dfa52e47-d33f-45ec-82d0-eae6640e0763, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1c951ce2-e389-4022-ac56-61cd475f660f, ip_allocation=immediate, mac_address=fa:16:3e:30:c9:d9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:25Z, description=, dns_domain=, id=3e41d61a-8e91-4a22-b3b2-96f0f656d396, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-251768300, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40603, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1695, status=ACTIVE, subnets=['7f568d78-10dd-40d4-ba8b-f51e011daba3'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:06:26Z, vlan_transparent=None, network_id=3e41d61a-8e91-4a22-b3b2-96f0f656d396, port_security_enabled=False, project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1738, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:06:30Z on network 3e41d61a-8e91-4a22-b3b2-96f0f656d396#033[00m
Oct 5 06:06:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 06:06:30 localhost podman[331710]: 2025-10-05 10:06:30.678078112 +0000 UTC m=+0.083308143 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 5 06:06:30 localhost podman[331710]: 2025-10-05 10:06:30.693431719 +0000 UTC m=+0.098661760 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 5 06:06:30 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 06:06:30 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:30.711 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:06:30 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:30.724 2 INFO neutron.agent.securitygroups_rpc [None req-64668cd4-df0c-434e-b55d-562cf94e8983 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m
Oct 5 06:06:30 localhost dnsmasq[330395]: exiting on receipt of SIGTERM
Oct 5 06:06:30 localhost podman[331755]: 2025-10-05 10:06:30.836584796 +0000 UTC m=+0.060148544 container kill 9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-680567b1-9b84-4077-a926-3629810550c9, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 5 06:06:30 localhost systemd[1]: libpod-9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3.scope: Deactivated successfully.
Oct 5 06:06:30 localhost podman[331768]: 2025-10-05 10:06:30.896452522 +0000 UTC m=+0.066548458 container kill e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:06:30 localhost dnsmasq[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/addn_hosts - 1 addresses
Oct 5 06:06:30 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/host
Oct 5 06:06:30 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/opts
Oct 5 06:06:30 localhost podman[331783]: 2025-10-05 10:06:30.919291953 +0000 UTC m=+0.058193352 container died 9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-680567b1-9b84-4077-a926-3629810550c9, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 5 06:06:30 localhost podman[331783]: 2025-10-05 10:06:30.972376825 +0000 UTC m=+0.111278184 container remove 9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-680567b1-9b84-4077-a926-3629810550c9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:06:30 localhost systemd[1]: libpod-conmon-9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3.scope: Deactivated successfully.
Oct 5 06:06:31 localhost kernel: device tapefe4c880-c4 left promiscuous mode
Oct 5 06:06:31 localhost ovn_controller[157794]: 2025-10-05T10:06:31Z|00280|binding|INFO|Releasing lport efe4c880-c4ce-4ab3-979f-d5b97bc71b68 from this chassis (sb_readonly=0)
Oct 5 06:06:31 localhost nova_compute[297021]: 2025-10-05 10:06:31.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:31 localhost ovn_controller[157794]: 2025-10-05T10:06:31Z|00281|binding|INFO|Setting lport efe4c880-c4ce-4ab3-979f-d5b97bc71b68 down in Southbound
Oct 5 06:06:31 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:31.043 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-680567b1-9b84-4077-a926-3629810550c9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-680567b1-9b84-4077-a926-3629810550c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dacd75da-185a-4233-9770-d81da152ca4c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=efe4c880-c4ce-4ab3-979f-d5b97bc71b68) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:06:31 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:31.045 163434 INFO neutron.agent.ovn.metadata.agent [-] Port efe4c880-c4ce-4ab3-979f-d5b97bc71b68 in datapath 680567b1-9b84-4077-a926-3629810550c9 unbound from our chassis#033[00m
Oct 5 06:06:31 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:31.047 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 680567b1-9b84-4077-a926-3629810550c9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 5 06:06:31 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:31.048 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[57a474d4-8f40-42d6-a12c-b0565ab1e3a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:06:31 localhost nova_compute[297021]: 2025-10-05 10:06:31.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:31 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:31.610 272040 INFO neutron.agent.dhcp.agent [None req-6c5b94ab-b935-41ca-87c0-38ae43720341 - - - - - -] DHCP configuration for ports {'1c951ce2-e389-4022-ac56-61cd475f660f'} is completed#033[00m
Oct 5 06:06:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:06:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 5 06:06:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e139 e139: 6 total, 6 up, 6 in
Oct 5 06:06:31 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e139: 6 total, 6 up, 6 in
Oct 5 06:06:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 06:06:31 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:06:31 localhost systemd[1]: tmp-crun.WE5t8x.mount: Deactivated successfully.
Oct 5 06:06:31 localhost systemd[1]: var-lib-containers-storage-overlay-dcdf7978942589f266d1b04d907484aea7690b4991548adf7b9451c1b72096d5-merged.mount: Deactivated successfully.
Oct 5 06:06:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e72d4af0bd9c5089943214fdaa3940c4d243aef05528370863c8c37234196b3-userdata-shm.mount: Deactivated successfully.
Oct 5 06:06:32 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:32.042 272040 INFO neutron.agent.dhcp.agent [None req-1f92c086-59e2-4e85-8d5e-044661888d3f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:06:32 localhost systemd[1]: run-netns-qdhcp\x2d680567b1\x2d9b84\x2d4077\x2da926\x2d3629810550c9.mount: Deactivated successfully.
Oct 5 06:06:32 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:32.177 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:06:32 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:32.352 2 INFO neutron.agent.securitygroups_rpc [None req-5d192d4f-b77f-4667-88b6-26c01da73471 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m
Oct 5 06:06:32 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:32.633 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:06:32 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:06:33 localhost ovn_controller[157794]: 2025-10-05T10:06:33Z|00282|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:06:33 localhost nova_compute[297021]: 2025-10-05 10:06:33.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:06:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 5 06:06:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e140 e140: 6 total, 6 up, 6 in
Oct 5 06:06:33 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e140: 6 total, 6 up, 6 in
Oct 5 06:06:34 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:34.158 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:30Z, description=, device_id=dfa52e47-d33f-45ec-82d0-eae6640e0763, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1c951ce2-e389-4022-ac56-61cd475f660f, ip_allocation=immediate, mac_address=fa:16:3e:30:c9:d9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:25Z, description=, dns_domain=, id=3e41d61a-8e91-4a22-b3b2-96f0f656d396, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-251768300, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40603, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1695, status=ACTIVE, subnets=['7f568d78-10dd-40d4-ba8b-f51e011daba3'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:06:26Z, vlan_transparent=None, network_id=3e41d61a-8e91-4a22-b3b2-96f0f656d396, port_security_enabled=False, project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1738, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:06:30Z on network 3e41d61a-8e91-4a22-b3b2-96f0f656d396#033[00m
Oct 5 06:06:34 localhost podman[331840]: 2025-10-05 10:06:34.379797092 +0000 UTC m=+0.063420164 container kill e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:06:34 localhost dnsmasq[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/addn_hosts - 1 addresses
Oct 5 06:06:34 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/host
Oct 5 06:06:34 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/opts
Oct 5 06:06:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 06:06:34 localhost systemd[1]: tmp-crun.dqt6iX.mount: Deactivated successfully.
Oct 5 06:06:34 localhost podman[331855]: 2025-10-05 10:06:34.503540772 +0000 UTC m=+0.097545130 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 5 06:06:34 localhost podman[331855]: 2025-10-05 10:06:34.546802347 +0000 UTC m=+0.140806725 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 5 06:06:34 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 06:06:34 localhost nova_compute[297021]: 2025-10-05 10:06:34.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:34 localhost nova_compute[297021]: 2025-10-05 10:06:34.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:34 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:34.856 272040 INFO neutron.agent.dhcp.agent [None req-99121a87-9472-4014-92e7-1684b5e914ab - - - - - -] DHCP configuration for ports {'1c951ce2-e389-4022-ac56-61cd475f660f'} is completed#033[00m Oct 5 06:06:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:06:36 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2987570623' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:06:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:06:36 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2987570623' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:06:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:37 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:37.178 2 INFO neutron.agent.securitygroups_rpc [None req-caada808-5cf9-4ee6-be88-f30f7fecba19 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:06:37 localhost systemd[1]: tmp-crun.5ByUYA.mount: Deactivated successfully. Oct 5 06:06:37 localhost podman[331887]: 2025-10-05 10:06:37.679227616 +0000 UTC m=+0.085736450 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:37 localhost podman[331887]: 2025-10-05 10:06:37.691815838 +0000 UTC m=+0.098324692 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:37 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.840 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.841 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.849 12 
DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c7c897ae-b00b-441c-a1b2-85a8d94c5fe3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.842600', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbd185c8-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': 
'3f2a067443c7c5d8bc378b603c491aa8f6a8e8fa4939b7b691d5df779b8d6aa6'}]}, 'timestamp': '2025-10-05 10:06:38.850723', '_unique_id': '00e663e64efb4fa5b8ae49db1fcbece5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR 
oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.852 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.854 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.875 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 16880000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9684eb83-4c5d-4cfc-aa5d-e7a425063d9b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 16880000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:06:38.854642', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'fbd56f94-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.099305733, 'message_signature': 'dd57e2f2124c381c74f691496df2e29db9726b35ae81a0464193694343cc8bb0'}]}, 'timestamp': '2025-10-05 10:06:38.876130', '_unique_id': '5aaec70a52a44c36a0f4e931fa20e394'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.877 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.878 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.900 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.901 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3d51f9d8-3188-48a8-9233-750faae4165d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:06:38.878765', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fbd958d4-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': '7e393f0ae29d0a96c3ca88554e8dc237c3b9853e04c71edf95a65991618726a6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:06:38.878765', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fbd96d10-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': '83fa0f918558ca4c30ddda605ce42144c09f91d322281f197ac9a86341ee6536'}]}, 'timestamp': '2025-10-05 10:06:38.902213', '_unique_id': '0b379e9a394644d7850210ebf22edfed'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.903 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.905 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.905 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e4820c0b-2fb8-4c33-91e3-5e516a00b75e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.905134', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbd9f17c-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': '4b3efe65cc340372199eeaba48009dcfd0e49a84533f177ea8437cb186a49967'}]}, 'timestamp': '2025-10-05 10:06:38.905647', '_unique_id': 'e54a0fbdd5fd4bf5ae7d2aa0b6f47f8b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.906 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.908 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.908 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1a30b939-40c7-4c3f-8c7e-9347e7866fc4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:06:38.908419', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'fbda7340-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.099305733, 'message_signature': 
'0a1cd79969adbd500c97fb3216ef79b6347901af95c3b21122a99038e88d7a9f'}]}, 'timestamp': '2025-10-05 10:06:38.909039', '_unique_id': '8cae9e215a374f8b9c1401bf1ee9832a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR 
oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.911 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.911 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8eec4ebc-88b0-440a-aba9-8752f8f79edb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.911257', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 
'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbdae0dc-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': '7fcb9f923d25bb6840c8cdd05ebac4e2106fadb640afac72b2f3d49f5adb88aa'}]}, 'timestamp': '2025-10-05 10:06:38.911796', '_unique_id': '2d6f1a33eedd464faee9a04639c158db'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging return 
fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 
06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.912 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.913 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.914 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.914 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ffb20634-c926-4fe7-9455-12398c50785d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:06:38.914016', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fbdb4bf8-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': '845fd073060f91d6469187410ab03bc4e9b66336df5813849e3c9b6028738cde'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:06:38.914016', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fbdb5dbe-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': 'b568fd5646353ec75d27f36a6fc26ad9748de952bb843eb38a2ec5f8d924ae81'}]}, 'timestamp': '2025-10-05 10:06:38.914909', '_unique_id': '977b7efe20b04ab19b3ec2ff1e71253f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.916 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.917 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8b96c3f7-7e1b-485a-aeed-6a2d3eed944a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.917063', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbdbc27c-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': '93ff50e7e1c2d414746d29d3c5a136f20ee316011dfd1ffe7cc56c96d90c2e74'}]}, 'timestamp': '2025-10-05 10:06:38.917548', '_unique_id': '56a66937f66d4cfaa900a6c991b9c041'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.919 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.920 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.920 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '98c5a6fd-5e69-4780-9b96-db8550526395', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:06:38.920071', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fbdc37d4-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': '51baa1880255eac970565916214724e82c8ded88a23b5cb730bd359c457602b6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:06:38.920071', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fbdc494a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': '8c1c1d4d31b1bc9808a73dd7257909cbd3cd4cecf97f3cec0748e505e6cb32a8'}]}, 'timestamp': '2025-10-05 10:06:38.920937', '_unique_id': 'cc687719174a4778b5be04ff37ab99a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:06:38.921 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.921 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.922 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'df96d932-0c84-41b7-9a38-d2316fdeba7c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:06:38.923056', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fbdcada4-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': 'b031bd566f55533244857b733faeafa9d307ca5678d0933d468695706e7d3fd6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:06:38.923056', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fbdcbef2-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': 'b09b45b63c0490b3229bd5e157ceae6235f702a9d3a38ee691a4c6a84f94bed4'}]}, 'timestamp': '2025-10-05 10:06:38.923948', '_unique_id': '0bb4b78ba6cc4d4698f20ff42505937d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.924 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.925 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.926 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.927 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '1403b5d6-670b-4343-b74e-4235e68811a4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.926034', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbdd20ea-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': '72094d39525c16824f050c0e748d7b9a23200658f4e584996a1a638e2a7cf07d'}]}, 'timestamp': '2025-10-05 10:06:38.926520', '_unique_id': '0d3d13fcefb24f8faf4b9bd6ba97bc85'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 10:06:38.928 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.928 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.929 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ab922b59-f584-4c74-a3f1-eb5cfe31e4b4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.928573', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbdd83fa-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': '3e7ab76e95e70c90f0b3b35778be383d5b0bdd35ca3d8bc1394c16b34d96a091'}]}, 'timestamp': '2025-10-05 10:06:38.929021', '_unique_id': 'e35426ff53514d48a77344fed61df95e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.930 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '003becd7-46d6-47cf-b832-931f209ecd3b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:06:38.931058', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fbdde5fc-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': '45b38e66fa74f2e04174a2214356ec93f1fcc9cabf84cbe0556db12b78934423'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:06:38.931058', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fbddffb0-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': 'eb4094f8985b5046b1f70a8df9ef6f403cf09af03e70fd8325bcd6205ee8dad5'}]}, 'timestamp': '2025-10-05 10:06:38.932167', '_unique_id': 'a9bb7cb6fbec4b8f8cda246ce5caf1ec'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.933 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.934 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.934 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cb9a8d79-7225-4f52-901e-fdc9771f696c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.934258', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbde636a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': 'd33d05a9d3554bd200116f31ab2e33d80e4177c0d2eb78aed9f7c4f0904d9a96'}]}, 'timestamp': '2025-10-05 10:06:38.934741', '_unique_id': '3b38dc3b65f243788c783918a6b17dd0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.935 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.936 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.936 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6d909a6c-7010-4dae-852b-9b98852c2bef', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.936820', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbdec62a-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': 'ef00ef38bd9f467207cb1389bf1e52063dca298642853037d827613bb262c632'}]}, 'timestamp': '2025-10-05 10:06:38.937267', '_unique_id': '422159f45c6746b9ab85047aef504cc9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.938 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.939 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.939 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '45926ffa-960f-4065-86bb-7638a4c71edf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.939323', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbdf2944-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': 'a7dfb6403e1a1c5ee189a8de227cc518a36c20a3729b4a6d472f981fa5daaed2'}]}, 'timestamp': '2025-10-05 10:06:38.939806', '_unique_id': 'c7a00c6d4e7a46f29b3cef81cedda9e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.940 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.941 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.941 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.953 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.953 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a2bb060e-872e-4456-83d5-7fcf0c39c5ba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:06:38.941964', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fbe14224-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.165912281, 'message_signature': '8663a34bafc20851046eb9487bbd3f0966d56ea36584c6ee1a84b611399e4090'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:06:38.941964', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fbe153cc-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.165912281, 'message_signature': '8dde5fbb727afef5056125329417e2979ff9bdda0b1622de766e46fe4f7d136c'}]}, 'timestamp': '2025-10-05 10:06:38.953975', '_unique_id': '1321cdc2021e4a03b266c5f271f7996e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.956 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.956 12 DEBUG ceilometer.compute.pollsters [-] 
2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '58d2372e-a76f-471b-b3ea-4ff7e64423be', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:06:38.956447', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'fbe1c582-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.066795819, 'message_signature': '62f62739ef4ff858e5fb9a6ca07c8edcb9f9b5150ff990dce2f826323626a85d'}]}, 'timestamp': 
'2025-10-05 10:06:38.956916', '_unique_id': '1290905edef247e6bed1bcfdd4cf1f3c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in 
_establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR 
oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 
12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.957 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.959 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.959 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.959 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '552de7f8-2979-40f7-8d25-8a9a85fd9c03', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:06:38.959365', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fbe238dc-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.165912281, 'message_signature': '2e3aef0794f4907dad5b736a516666f5ce5eecbdbd25f20ed357270640dbe257'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:06:38.959365', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fbe248cc-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.165912281, 'message_signature': '8a7e97b154e7a57821c20cf433f7c77fe5fc4304d764059c3356c1b82cc0c6c1'}]}, 'timestamp': '2025-10-05 10:06:38.960246', '_unique_id': 'eea6d34120c1403e8ed7c8a6770381da'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.962 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.962 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.962 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '504f881e-301a-4c10-a290-72ba854c71f6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:06:38.962328', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fbe2abf0-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': '21b5b118c977c11c574efebdc840322faf4005e1f52da8f161eba9bbd8374ace'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:06:38.962328', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fbe2bbcc-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.102686614, 'message_signature': '5bc2de2bb782fc8927a93cfc168021e1933b748b49c789534a866d3655eb5718'}]}, 'timestamp': '2025-10-05 10:06:38.963216', '_unique_id': '4be85f0c4eb04c21853e495d939ce86c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:06:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:06:38.964 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.964 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.965 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.965 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.965 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2634d81d-55c6-4d5a-ab36-b554e6232502', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:06:38.965299', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fbe31f86-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.165912281, 'message_signature': '6f6ecf2183362701bb84f9639bc188b128f23459572425b746478967806f69e8'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:06:38.965299', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fbe32f9e-a1d2-11f0-9396-fa163ec6f33d', 'monotonic_time': 12278.165912281, 'message_signature': '920a9d6e4c0bc46f6a16dcff4cc67e31bc18b7bc3d64acefc3d1afba235a3e39'}]}, 'timestamp': '2025-10-05 10:06:38.966157', '_unique_id': 'b162b14c94194c5b8bb5fb7db91771fe'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 
06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:06:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:06:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:06:38.967 12 ERROR oslo_messaging.notify.messaging Oct 5 06:06:39 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:39.144 2 INFO neutron.agent.securitygroups_rpc [None req-c4d1c914-9f42-4764-bf0d-d13f6d9dcb90 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:39 localhost dnsmasq[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/addn_hosts - 0 addresses Oct 5 06:06:39 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/host Oct 5 06:06:39 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/3e41d61a-8e91-4a22-b3b2-96f0f656d396/opts Oct 5 06:06:39 localhost podman[331921]: 2025-10-05 10:06:39.506112159 +0000 UTC m=+0.062482277 container kill e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:06:39 localhost nova_compute[297021]: 2025-10-05 10:06:39.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:06:39 localhost nova_compute[297021]: 2025-10-05 10:06:39.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:06:39 localhost nova_compute[297021]: 2025-10-05 10:06:39.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:06:39 localhost nova_compute[297021]: 2025-10-05 10:06:39.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:06:39 localhost nova_compute[297021]: 2025-10-05 10:06:39.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:39 localhost ovn_controller[157794]: 2025-10-05T10:06:39Z|00283|binding|INFO|Releasing lport fb1d0bf2-336a-4281-a08e-0042c1ee64d1 from this chassis (sb_readonly=0) Oct 5 06:06:39 localhost nova_compute[297021]: 2025-10-05 10:06:39.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:06:39 localhost kernel: device 
tapfb1d0bf2-33 left promiscuous mode Oct 5 06:06:39 localhost ovn_controller[157794]: 2025-10-05T10:06:39Z|00284|binding|INFO|Setting lport fb1d0bf2-336a-4281-a08e-0042c1ee64d1 down in Southbound Oct 5 06:06:39 localhost nova_compute[297021]: 2025-10-05 10:06:39.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:06:39 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:39.851 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-3e41d61a-8e91-4a22-b3b2-96f0f656d396', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e41d61a-8e91-4a22-b3b2-96f0f656d396', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac9f35fe-f6f7-41a6-9126-781e0164cc70, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fb1d0bf2-336a-4281-a08e-0042c1ee64d1) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:39 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:39.853 163434 INFO 
neutron.agent.ovn.metadata.agent [-] Port fb1d0bf2-336a-4281-a08e-0042c1ee64d1 in datapath 3e41d61a-8e91-4a22-b3b2-96f0f656d396 unbound from our chassis#033[00m Oct 5 06:06:39 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:39.855 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e41d61a-8e91-4a22-b3b2-96f0f656d396, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:06:39 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:39.856 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b529b784-5880-4a3c-b8cb-e9a2c40f493d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:39 localhost nova_compute[297021]: 2025-10-05 10:06:39.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:40.100 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:40 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:40.229 2 INFO neutron.agent.securitygroups_rpc [None req-683e8cfe-fbca-47cb-9979-a4649326b1aa 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:06:40 localhost podman[331945]: 2025-10-05 10:06:40.682015265 +0000 UTC m=+0.081349731 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 06:06:40 localhost podman[331945]: 2025-10-05 10:06:40.697881555 +0000 UTC m=+0.097216031 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6) Oct 5 06:06:40 localhost systemd[1]: 
2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:06:41 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:41.438 2 INFO neutron.agent.securitygroups_rpc [None req-e675479c-0810-4921-b4fc-eddb22a96c71 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e140 do_prune osdmap full prune enabled Oct 5 06:06:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e141 e141: 6 total, 6 up, 6 in Oct 5 06:06:41 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e141: 6 total, 6 up, 6 in Oct 5 06:06:41 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:41.872 2 INFO neutron.agent.securitygroups_rpc [None req-c30ba9e1-2900-4469-8322-eb8944f17634 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:42 localhost dnsmasq[331709]: exiting on receipt of SIGTERM Oct 5 06:06:42 localhost podman[331982]: 2025-10-05 10:06:42.862685297 +0000 UTC m=+0.065844059 container kill e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team) Oct 5 06:06:42 localhost systemd[1]: libpod-e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e.scope: Deactivated successfully. Oct 5 06:06:42 localhost podman[331994]: 2025-10-05 10:06:42.935014301 +0000 UTC m=+0.056585968 container died e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:42 localhost systemd[1]: tmp-crun.0eu326.mount: Deactivated successfully. Oct 5 06:06:42 localhost podman[331994]: 2025-10-05 10:06:42.970424372 +0000 UTC m=+0.091996009 container cleanup e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:06:42 localhost systemd[1]: libpod-conmon-e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e.scope: Deactivated successfully. 
Oct 5 06:06:43 localhost podman[331996]: 2025-10-05 10:06:43.017679605 +0000 UTC m=+0.133158976 container remove e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3e41d61a-8e91-4a22-b3b2-96f0f656d396, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Oct 5 06:06:43 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:43.324 272040 INFO neutron.agent.dhcp.agent [None req-c5d3f91c-a44f-4004-bccb-cdc59e32f9f6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:43 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:43.338 2 INFO neutron.agent.securitygroups_rpc [None req-b207964d-c473-4b5c-80bf-217bcabe0d8d 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:43 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:43.381 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:43 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:43.383 2 INFO neutron.agent.securitygroups_rpc [None req-c80266e1-ce8b-42cd-8ef9-5134d4e4e2d9 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:06:43 localhost systemd[1]: var-lib-containers-storage-overlay-eced5058f876ccbac284e2d251db44e363e60ce9c31c9d193fa29717f1336b49-merged.mount: Deactivated successfully. Oct 5 06:06:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e3dadb0f4f45d3c8fc60759e7101019604622a5616bbd5b04bef52019a76e02e-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:43 localhost systemd[1]: run-netns-qdhcp\x2d3e41d61a\x2d8e91\x2d4a22\x2db3b2\x2d96f0f656d396.mount: Deactivated successfully. Oct 5 06:06:43 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:43.877 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:43 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:43.882 2 INFO neutron.agent.securitygroups_rpc [None req-cee95e86-30a7-4cf2-9265-4836280d6987 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:43 localhost podman[332024]: 2025-10-05 10:06:43.925264953 +0000 UTC m=+0.083330733 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:06:43 localhost podman[332024]: 2025-10-05 10:06:43.963972265 +0000 UTC m=+0.122038015 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:06:43 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 06:06:44 localhost ovn_controller[157794]: 2025-10-05T10:06:44Z|00285|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:06:44 localhost nova_compute[297021]: 2025-10-05 10:06:44.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:44 localhost nova_compute[297021]: 2025-10-05 10:06:44.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:45 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:45.019 2 INFO neutron.agent.securitygroups_rpc [None req-975a16a1-2899-4f2c-9683-7c6f4127db83 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:45 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:45.773 2 INFO neutron.agent.securitygroups_rpc [None req-af9fe55a-377a-42e5-b25a-66e050161e7a 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:46 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:46.444 2 INFO neutron.agent.securitygroups_rpc [None req-54a9b384-5b39-4cb5-8dd8-7b8811d48589 66f5f3c3fea84dc59d8f4b0ce19fcf49 9995ae9ec275409eab70e1b7587c3571 - - default default] Security group member updated ['74b3fad2-e7e6-4bbe-a76b-524ed6175634']#033[00m Oct 5 06:06:46 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:46.601 2 INFO neutron.agent.securitygroups_rpc [None req-54a9b384-5b39-4cb5-8dd8-7b8811d48589 66f5f3c3fea84dc59d8f4b0ce19fcf49 9995ae9ec275409eab70e1b7587c3571 - - default default] Security group member updated ['74b3fad2-e7e6-4bbe-a76b-524ed6175634']#033[00m Oct 5 06:06:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e141 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:47 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:47.106 2 INFO neutron.agent.securitygroups_rpc [None req-77d35cad-696a-43a3-aeb7-2ddda9965d74 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:47 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:47.556 2 INFO neutron.agent.securitygroups_rpc [None req-cd6b925e-5b31-44b2-bb66-2a94b469b1f8 66f5f3c3fea84dc59d8f4b0ce19fcf49 9995ae9ec275409eab70e1b7587c3571 - - default default] Security group member updated ['74b3fad2-e7e6-4bbe-a76b-524ed6175634']#033[00m Oct 5 06:06:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e141 do_prune osdmap full prune enabled Oct 5 06:06:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e142 e142: 6 total, 6 up, 6 in Oct 5 06:06:47 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e142: 6 total, 6 up, 6 in Oct 5 06:06:48 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:48.308 272040 INFO neutron.agent.linux.ip_lib [None req-8871fc19-a352-401a-90c9-dc3f49ffbbc9 - - - - - -] Device tapc79d709b-85 cannot be used as it has no MAC address#033[00m Oct 5 06:06:48 localhost nova_compute[297021]: 2025-10-05 10:06:48.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:48 localhost kernel: device tapc79d709b-85 entered promiscuous mode Oct 5 06:06:48 localhost NetworkManager[5981]: [1759658808.3410] manager: (tapc79d709b-85): new Generic device (/org/freedesktop/NetworkManager/Devices/47) Oct 5 06:06:48 localhost nova_compute[297021]: 2025-10-05 10:06:48.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 
06:06:48 localhost ovn_controller[157794]: 2025-10-05T10:06:48Z|00286|binding|INFO|Claiming lport c79d709b-85dc-4431-8756-2b292d1533d6 for this chassis. Oct 5 06:06:48 localhost ovn_controller[157794]: 2025-10-05T10:06:48Z|00287|binding|INFO|c79d709b-85dc-4431-8756-2b292d1533d6: Claiming unknown Oct 5 06:06:48 localhost systemd-udevd[332058]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:06:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:48.352 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-b766a8ef-8608-4684-8549-de50497f1441', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b766a8ef-8608-4684-8549-de50497f1441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eaa64ecb-b5cb-432e-b97f-67f5a6bcfc8b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c79d709b-85dc-4431-8756-2b292d1533d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:48.354 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 
c79d709b-85dc-4431-8756-2b292d1533d6 in datapath b766a8ef-8608-4684-8549-de50497f1441 bound to our chassis#033[00m Oct 5 06:06:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:48.355 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b766a8ef-8608-4684-8549-de50497f1441 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:48.357 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3c73f0d9-f345-4cd3-82dc-06cecbc6144e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:48 localhost journal[237931]: ethtool ioctl error on tapc79d709b-85: No such device Oct 5 06:06:48 localhost journal[237931]: ethtool ioctl error on tapc79d709b-85: No such device Oct 5 06:06:48 localhost ovn_controller[157794]: 2025-10-05T10:06:48Z|00288|binding|INFO|Setting lport c79d709b-85dc-4431-8756-2b292d1533d6 ovn-installed in OVS Oct 5 06:06:48 localhost ovn_controller[157794]: 2025-10-05T10:06:48Z|00289|binding|INFO|Setting lport c79d709b-85dc-4431-8756-2b292d1533d6 up in Southbound Oct 5 06:06:48 localhost nova_compute[297021]: 2025-10-05 10:06:48.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:48 localhost journal[237931]: ethtool ioctl error on tapc79d709b-85: No such device Oct 5 06:06:48 localhost journal[237931]: ethtool ioctl error on tapc79d709b-85: No such device Oct 5 06:06:48 localhost journal[237931]: ethtool ioctl error on tapc79d709b-85: No such device Oct 5 06:06:48 localhost journal[237931]: ethtool ioctl error on tapc79d709b-85: No such device Oct 5 06:06:48 localhost journal[237931]: ethtool ioctl error on tapc79d709b-85: No such device Oct 5 06:06:48 localhost journal[237931]: 
ethtool ioctl error on tapc79d709b-85: No such device Oct 5 06:06:48 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:48.414 2 INFO neutron.agent.securitygroups_rpc [None req-ca8775b6-9a91-414b-91e2-5cdf2f4a29b5 66f5f3c3fea84dc59d8f4b0ce19fcf49 9995ae9ec275409eab70e1b7587c3571 - - default default] Security group member updated ['74b3fad2-e7e6-4bbe-a76b-524ed6175634']#033[00m Oct 5 06:06:48 localhost nova_compute[297021]: 2025-10-05 10:06:48.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:48 localhost nova_compute[297021]: 2025-10-05 10:06:48.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:48 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:48.471 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e142 do_prune osdmap full prune enabled Oct 5 06:06:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e143 e143: 6 total, 6 up, 6 in Oct 5 06:06:48 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e143: 6 total, 6 up, 6 in Oct 5 06:06:49 localhost podman[332129]: Oct 5 06:06:49 localhost podman[332129]: 2025-10-05 10:06:49.275231236 +0000 UTC m=+0.086532902 container create 3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b766a8ef-8608-4684-8549-de50497f1441, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0) Oct 5 06:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:06:49 localhost systemd[1]: Started libpod-conmon-3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6.scope. Oct 5 06:06:49 localhost podman[332129]: 2025-10-05 10:06:49.232151046 +0000 UTC m=+0.043452742 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:49 localhost systemd[1]: Started libcrun container. Oct 5 06:06:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f64063f9a4ef225878990f035c26f178f73a3a80bd3343aa3149f1f7a90ae3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:49 localhost podman[332129]: 2025-10-05 10:06:49.355012363 +0000 UTC m=+0.166314019 container init 3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b766a8ef-8608-4684-8549-de50497f1441, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:06:49 localhost podman[332129]: 2025-10-05 10:06:49.365174828 +0000 UTC m=+0.176476494 container start 3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b766a8ef-8608-4684-8549-de50497f1441, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:06:49 localhost dnsmasq[332158]: started, version 2.85 cachesize 150 Oct 5 06:06:49 localhost dnsmasq[332158]: DNS service limited to local subnets Oct 5 06:06:49 localhost dnsmasq[332158]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:49 localhost dnsmasq[332158]: warning: no upstream servers configured Oct 5 06:06:49 localhost dnsmasq-dhcp[332158]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:06:49 localhost dnsmasq[332158]: read /var/lib/neutron/dhcp/b766a8ef-8608-4684-8549-de50497f1441/addn_hosts - 0 addresses Oct 5 06:06:49 localhost dnsmasq-dhcp[332158]: read /var/lib/neutron/dhcp/b766a8ef-8608-4684-8549-de50497f1441/host Oct 5 06:06:49 localhost dnsmasq-dhcp[332158]: read /var/lib/neutron/dhcp/b766a8ef-8608-4684-8549-de50497f1441/opts Oct 5 06:06:49 localhost podman[332143]: 2025-10-05 10:06:49.42747831 +0000 UTC m=+0.103383098 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', 
'--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:06:49 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:49.428 272040 INFO neutron.agent.dhcp.agent [None req-8871fc19-a352-401a-90c9-dc3f49ffbbc9 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:47Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3efc02fd-78e5-47f6-82d6-ebe740f6a41f, ip_allocation=immediate, mac_address=fa:16:3e:9a:44:84, name=tempest-PortsIpV6TestJSON-138828475, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:46Z, description=, dns_domain=, id=b766a8ef-8608-4684-8549-de50497f1441, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1008954387, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=49332, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1804, status=ACTIVE, subnets=['57200135-15e0-4cd8-bcaf-9e6777306716'], tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:06:46Z, 
vlan_transparent=None, network_id=b766a8ef-8608-4684-8549-de50497f1441, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1813, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:06:47Z on network b766a8ef-8608-4684-8549-de50497f1441#033[00m Oct 5 06:06:49 localhost podman[332143]: 2025-10-05 10:06:49.464881966 +0000 UTC m=+0.140786764 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:06:49 localhost systemd[1]: 
fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:06:49 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:49.485 272040 INFO neutron.agent.dhcp.agent [None req-adacd611-ffa0-4b6d-abb0-3c4368193006 - - - - - -] DHCP configuration for ports {'5aa522d7-0de1-4d79-ae94-32e6e7a45369'} is completed#033[00m Oct 5 06:06:49 localhost dnsmasq[332158]: read /var/lib/neutron/dhcp/b766a8ef-8608-4684-8549-de50497f1441/addn_hosts - 1 addresses Oct 5 06:06:49 localhost dnsmasq-dhcp[332158]: read /var/lib/neutron/dhcp/b766a8ef-8608-4684-8549-de50497f1441/host Oct 5 06:06:49 localhost dnsmasq-dhcp[332158]: read /var/lib/neutron/dhcp/b766a8ef-8608-4684-8549-de50497f1441/opts Oct 5 06:06:49 localhost podman[332190]: 2025-10-05 10:06:49.616942236 +0000 UTC m=+0.057215645 container kill 3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b766a8ef-8608-4684-8549-de50497f1441, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 5 06:06:49 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:49.822 272040 INFO neutron.agent.dhcp.agent [None req-ba425225-8975-44ef-a3c4-17ab917b918b - - - - - -] DHCP configuration for ports {'3efc02fd-78e5-47f6-82d6-ebe740f6a41f'} is completed#033[00m Oct 5 06:06:49 localhost nova_compute[297021]: 2025-10-05 10:06:49.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:49 localhost dnsmasq[332158]: read /var/lib/neutron/dhcp/b766a8ef-8608-4684-8549-de50497f1441/addn_hosts - 0 addresses Oct 5 
06:06:49 localhost dnsmasq-dhcp[332158]: read /var/lib/neutron/dhcp/b766a8ef-8608-4684-8549-de50497f1441/host Oct 5 06:06:49 localhost dnsmasq-dhcp[332158]: read /var/lib/neutron/dhcp/b766a8ef-8608-4684-8549-de50497f1441/opts Oct 5 06:06:49 localhost podman[332230]: 2025-10-05 10:06:49.943622978 +0000 UTC m=+0.065239884 container kill 3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b766a8ef-8608-4684-8549-de50497f1441, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:06:50 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:50.018 2 INFO neutron.agent.securitygroups_rpc [None req-3706c1d5-eca7-468e-9b54-93969000b928 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:50 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:50.598 2 INFO neutron.agent.securitygroups_rpc [None req-922985ae-8f2a-4562-a4ae-71819a7745cf 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e143 do_prune osdmap full prune enabled Oct 5 06:06:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e144 e144: 6 total, 6 up, 6 in Oct 5 06:06:50 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e144: 6 total, 6 up, 6 in Oct 5 06:06:50 localhost dnsmasq[332158]: exiting on receipt of SIGTERM Oct 5 06:06:50 localhost podman[332268]: 2025-10-05 
10:06:50.787895716 +0000 UTC m=+0.067321449 container kill 3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b766a8ef-8608-4684-8549-de50497f1441, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:06:50 localhost systemd[1]: libpod-3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6.scope: Deactivated successfully. Oct 5 06:06:50 localhost podman[332281]: 2025-10-05 10:06:50.853559009 +0000 UTC m=+0.052927608 container died 3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b766a8ef-8608-4684-8549-de50497f1441, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:06:50 localhost systemd[1]: tmp-crun.XZzKoT.mount: Deactivated successfully. 
Oct 5 06:06:50 localhost podman[332281]: 2025-10-05 10:06:50.941845367 +0000 UTC m=+0.141213956 container cleanup 3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b766a8ef-8608-4684-8549-de50497f1441, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:06:50 localhost systemd[1]: libpod-conmon-3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6.scope: Deactivated successfully. Oct 5 06:06:50 localhost podman[332288]: 2025-10-05 10:06:50.963613028 +0000 UTC m=+0.149454510 container remove 3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b766a8ef-8608-4684-8549-de50497f1441, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:51 localhost ovn_controller[157794]: 2025-10-05T10:06:51Z|00290|binding|INFO|Releasing lport c79d709b-85dc-4431-8756-2b292d1533d6 from this chassis (sb_readonly=0) Oct 5 06:06:51 localhost nova_compute[297021]: 2025-10-05 10:06:51.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:51 localhost ovn_controller[157794]: 2025-10-05T10:06:51Z|00291|binding|INFO|Setting lport c79d709b-85dc-4431-8756-2b292d1533d6 down 
in Southbound Oct 5 06:06:51 localhost kernel: device tapc79d709b-85 left promiscuous mode Oct 5 06:06:51 localhost ovn_controller[157794]: 2025-10-05T10:06:51Z|00292|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:06:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:51.034 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-b766a8ef-8608-4684-8549-de50497f1441', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b766a8ef-8608-4684-8549-de50497f1441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eaa64ecb-b5cb-432e-b97f-67f5a6bcfc8b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c79d709b-85dc-4431-8756-2b292d1533d6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:51.040 163434 INFO neutron.agent.ovn.metadata.agent [-] Port c79d709b-85dc-4431-8756-2b292d1533d6 in datapath b766a8ef-8608-4684-8549-de50497f1441 unbound from our 
chassis#033[00m Oct 5 06:06:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:51.042 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b766a8ef-8608-4684-8549-de50497f1441 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:51 localhost nova_compute[297021]: 2025-10-05 10:06:51.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:51 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:51.043 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcd96ed-3693-445f-972b-d8e37942f55d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:51 localhost nova_compute[297021]: 2025-10-05 10:06:51.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:51 localhost nova_compute[297021]: 2025-10-05 10:06:51.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:51 localhost systemd[1]: var-lib-containers-storage-overlay-09f64063f9a4ef225878990f035c26f178f73a3a80bd3343aa3149f1f7a90ae3-merged.mount: Deactivated successfully. Oct 5 06:06:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3db1066bdd6f932a02c4a860637b0d020eadfede47fed238af6bbb5b47ca04c6-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:51 localhost systemd[1]: run-netns-qdhcp\x2db766a8ef\x2d8608\x2d4684\x2d8549\x2dde50497f1441.mount: Deactivated successfully. 
Oct 5 06:06:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:51.300 272040 INFO neutron.agent.dhcp.agent [None req-fe52fe29-2b6c-495b-a942-d18ddad12d79 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:51.301 272040 INFO neutron.agent.dhcp.agent [None req-fe52fe29-2b6c-495b-a942-d18ddad12d79 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:51 localhost podman[248506]: time="2025-10-05T10:06:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:06:51 localhost podman[248506]: @ - - [05/Oct/2025:10:06:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:06:51 localhost podman[248506]: @ - - [05/Oct/2025:10:06:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19372 "" "Go-http-client/1.1" Oct 5 06:06:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:51 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:51.660 2 INFO neutron.agent.securitygroups_rpc [None req-de821bfe-e6c2-4d08-8692-047608f1121d 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e144 do_prune osdmap full prune enabled Oct 5 06:06:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e145 e145: 6 total, 6 up, 6 in Oct 5 06:06:51 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e145: 6 total, 6 up, 6 in Oct 5 06:06:52 localhost openstack_network_exporter[250601]: ERROR 10:06:52 appctl.go:144: Failed 
to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:06:52 localhost openstack_network_exporter[250601]: ERROR 10:06:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:06:52 localhost openstack_network_exporter[250601]: ERROR 10:06:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:06:52 localhost openstack_network_exporter[250601]: ERROR 10:06:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:06:52 localhost openstack_network_exporter[250601]: Oct 5 06:06:52 localhost openstack_network_exporter[250601]: ERROR 10:06:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:06:52 localhost openstack_network_exporter[250601]: Oct 5 06:06:52 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:52.055 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:53 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:53.741 2 INFO neutron.agent.securitygroups_rpc [None req-0f6b70c8-8ba4-43a2-955f-5a485cb09cb4 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:06:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:06:54 localhost systemd[1]: tmp-crun.tUKI1s.mount: Deactivated successfully. 
Oct 5 06:06:54 localhost podman[332312]: 2025-10-05 10:06:54.677996913 +0000 UTC m=+0.085267038 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:06:54 localhost podman[332312]: 2025-10-05 10:06:54.6908015 +0000 UTC m=+0.098071645 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:06:54 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:06:54 localhost systemd[1]: tmp-crun.Ko7zf0.mount: Deactivated successfully. 
Oct 5 06:06:54 localhost podman[332313]: 2025-10-05 10:06:54.789685106 +0000 UTC m=+0.193703892 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Oct 5 06:06:54 localhost podman[332313]: 2025-10-05 10:06:54.807831718 +0000 UTC m=+0.211850494 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd) Oct 5 06:06:54 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:06:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:54.860 272040 INFO neutron.agent.linux.ip_lib [None req-235fd694-1eb3-4b80-929c-051589cd59f7 - - - - - -] Device tap2da666e3-c2 cannot be used as it has no MAC address#033[00m Oct 5 06:06:54 localhost nova_compute[297021]: 2025-10-05 10:06:54.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:54 localhost nova_compute[297021]: 2025-10-05 10:06:54.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:06:54 localhost kernel: device tap2da666e3-c2 entered promiscuous mode Oct 5 06:06:54 localhost NetworkManager[5981]: [1759658814.9367] manager: (tap2da666e3-c2): new Generic device (/org/freedesktop/NetworkManager/Devices/48) Oct 5 06:06:54 localhost nova_compute[297021]: 2025-10-05 10:06:54.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:54 localhost ovn_controller[157794]: 2025-10-05T10:06:54Z|00293|binding|INFO|Claiming lport 2da666e3-c295-4e79-a92e-717ef91f1f45 for this chassis. Oct 5 06:06:54 localhost ovn_controller[157794]: 2025-10-05T10:06:54Z|00294|binding|INFO|2da666e3-c295-4e79-a92e-717ef91f1f45: Claiming unknown Oct 5 06:06:54 localhost systemd-udevd[332359]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:06:54 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:54.952 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-a43bf973-867b-43e1-81f2-e85f8078374d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a43bf973-867b-43e1-81f2-e85f8078374d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9995ae9ec275409eab70e1b7587c3571', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=229c7a89-78a5-42c5-8d56-b5580c218c89, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2da666e3-c295-4e79-a92e-717ef91f1f45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:54 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:54.954 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 2da666e3-c295-4e79-a92e-717ef91f1f45 in datapath a43bf973-867b-43e1-81f2-e85f8078374d bound to our chassis#033[00m Oct 5 06:06:54 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:54.956 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a43bf973-867b-43e1-81f2-e85f8078374d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:54 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:54.957 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d23876-28fb-4fe9-9814-fda390e4d49a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:54 localhost ovn_controller[157794]: 2025-10-05T10:06:54Z|00295|binding|INFO|Setting lport 2da666e3-c295-4e79-a92e-717ef91f1f45 ovn-installed in OVS Oct 5 06:06:54 localhost ovn_controller[157794]: 2025-10-05T10:06:54Z|00296|binding|INFO|Setting lport 2da666e3-c295-4e79-a92e-717ef91f1f45 up in Southbound Oct 5 06:06:54 localhost nova_compute[297021]: 2025-10-05 10:06:54.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost nova_compute[297021]: 2025-10-05 10:06:55.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:55.066 2 INFO neutron.agent.securitygroups_rpc [None req-f33575a7-63e5-4033-bff4-513fe0684cd0 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:55 localhost nova_compute[297021]: 2025-10-05 10:06:55.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:55.387 272040 INFO neutron.agent.linux.ip_lib [None req-2f4b827f-86bd-40b9-b58a-08c7bf9cc95c - - - - - -] Device tap05df4f25-00 cannot be used as it has no MAC address#033[00m Oct 5 06:06:55 localhost nova_compute[297021]: 2025-10-05 10:06:55.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost kernel: device tap05df4f25-00 entered promiscuous mode Oct 5 06:06:55 localhost NetworkManager[5981]: [1759658815.4231] manager: (tap05df4f25-00): new Generic device (/org/freedesktop/NetworkManager/Devices/49) Oct 5 06:06:55 localhost ovn_controller[157794]: 2025-10-05T10:06:55Z|00297|binding|INFO|Claiming lport 05df4f25-004c-4eec-8a24-04eb659d113f for this chassis. Oct 5 06:06:55 localhost ovn_controller[157794]: 2025-10-05T10:06:55Z|00298|binding|INFO|05df4f25-004c-4eec-8a24-04eb659d113f: Claiming unknown Oct 5 06:06:55 localhost nova_compute[297021]: 2025-10-05 10:06:55.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:55.434 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-2bd78593-e7fa-411a-917c-bbefb7a9e8c9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd78593-e7fa-411a-917c-bbefb7a9e8c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=be6e1c06-d130-426e-b901-3f60565235d4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=05df4f25-004c-4eec-8a24-04eb659d113f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:55 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:55.437 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 05df4f25-004c-4eec-8a24-04eb659d113f in datapath 2bd78593-e7fa-411a-917c-bbefb7a9e8c9 bound to our chassis#033[00m Oct 5 06:06:55 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:55.439 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7bf02030-c1f7-428a-ba02-dfe34abf4d68 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:06:55 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:55.440 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd78593-e7fa-411a-917c-bbefb7a9e8c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:06:55 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:55.441 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[c874d7bd-c4e6-4472-a987-de9ab57c4bcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:55 localhost ovn_controller[157794]: 2025-10-05T10:06:55Z|00299|binding|INFO|Setting lport 05df4f25-004c-4eec-8a24-04eb659d113f ovn-installed in OVS Oct 5 06:06:55 localhost ovn_controller[157794]: 2025-10-05T10:06:55Z|00300|binding|INFO|Setting lport 05df4f25-004c-4eec-8a24-04eb659d113f up in Southbound Oct 5 06:06:55 localhost nova_compute[297021]: 2025-10-05 10:06:55.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 
5 06:06:55 localhost nova_compute[297021]: 2025-10-05 10:06:55.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost nova_compute[297021]: 2025-10-05 10:06:55.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost nova_compute[297021]: 2025-10-05 10:06:55.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:55.550 2 INFO neutron.agent.securitygroups_rpc [None req-028ca752-7a11-4005-953f-28a11d2ad44f cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:55 localhost podman[332447]: Oct 5 06:06:55 localhost podman[332447]: 2025-10-05 10:06:55.987782633 +0000 UTC m=+0.101097897 container create c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a43bf973-867b-43e1-81f2-e85f8078374d, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:06:56 localhost systemd[1]: Started libpod-conmon-c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6.scope. 
Oct 5 06:06:56 localhost podman[332447]: 2025-10-05 10:06:55.9401898 +0000 UTC m=+0.053505114 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:56 localhost systemd[1]: tmp-crun.5Z7zgH.mount: Deactivated successfully. Oct 5 06:06:56 localhost systemd[1]: Started libcrun container. Oct 5 06:06:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a69ea66d928b1855ad337a07219a4c0bbb3f01a2347f48853c7c385e23e1f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:56 localhost podman[332447]: 2025-10-05 10:06:56.071297441 +0000 UTC m=+0.184612705 container init c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a43bf973-867b-43e1-81f2-e85f8078374d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:06:56 localhost podman[332447]: 2025-10-05 10:06:56.081112238 +0000 UTC m=+0.194427492 container start c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a43bf973-867b-43e1-81f2-e85f8078374d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:56 localhost dnsmasq[332472]: started, version 2.85 cachesize 150 Oct 5 06:06:56 localhost 
dnsmasq[332472]: DNS service limited to local subnets Oct 5 06:06:56 localhost dnsmasq[332472]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:56 localhost dnsmasq[332472]: warning: no upstream servers configured Oct 5 06:06:56 localhost dnsmasq-dhcp[332472]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:06:56 localhost dnsmasq[332472]: read /var/lib/neutron/dhcp/a43bf973-867b-43e1-81f2-e85f8078374d/addn_hosts - 0 addresses Oct 5 06:06:56 localhost dnsmasq-dhcp[332472]: read /var/lib/neutron/dhcp/a43bf973-867b-43e1-81f2-e85f8078374d/host Oct 5 06:06:56 localhost dnsmasq-dhcp[332472]: read /var/lib/neutron/dhcp/a43bf973-867b-43e1-81f2-e85f8078374d/opts Oct 5 06:06:56 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:56.276 272040 INFO neutron.agent.dhcp.agent [None req-9617bcf9-5b25-4680-8a4c-5eff24d2e946 - - - - - -] DHCP configuration for ports {'84c87585-720a-4c8d-adc3-23567834d4f3'} is completed#033[00m Oct 5 06:06:56 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:56.292 2 INFO neutron.agent.securitygroups_rpc [None req-be1c75d1-6103-45bc-ac58-5c31557614fb cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:56 localhost podman[332509]: Oct 5 06:06:56 localhost dnsmasq[332472]: exiting on receipt of SIGTERM Oct 5 06:06:56 localhost podman[332523]: 2025-10-05 10:06:56.463826651 +0000 UTC m=+0.051603152 container kill c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a43bf973-867b-43e1-81f2-e85f8078374d, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:56 localhost systemd[1]: libpod-c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6.scope: Deactivated successfully. Oct 5 06:06:56 localhost podman[332509]: 2025-10-05 10:06:56.510723265 +0000 UTC m=+0.131992066 container create 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:06:56 localhost podman[332509]: 2025-10-05 10:06:56.416614569 +0000 UTC m=+0.037883380 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:56 localhost podman[332538]: 2025-10-05 10:06:56.532655801 +0000 UTC m=+0.058212432 container died c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a43bf973-867b-43e1-81f2-e85f8078374d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:06:56 localhost systemd[1]: Started libpod-conmon-6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a.scope. 
Oct 5 06:06:56 localhost systemd[1]: Started libcrun container. Oct 5 06:06:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21236e8a5d08f3b13c9eb8782982e2105ca4144190d2c158a53c556a428f48cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:56 localhost podman[332538]: 2025-10-05 10:06:56.572524533 +0000 UTC m=+0.098081124 container cleanup c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a43bf973-867b-43e1-81f2-e85f8078374d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 06:06:56 localhost systemd[1]: libpod-conmon-c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6.scope: Deactivated successfully. 
Oct 5 06:06:56 localhost podman[332509]: 2025-10-05 10:06:56.578767822 +0000 UTC m=+0.200036613 container init 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:56 localhost dnsmasq[332568]: started, version 2.85 cachesize 150 Oct 5 06:06:56 localhost dnsmasq[332568]: DNS service limited to local subnets Oct 5 06:06:56 localhost dnsmasq[332568]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:56 localhost dnsmasq[332568]: warning: no upstream servers configured Oct 5 06:06:56 localhost dnsmasq-dhcp[332568]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:06:56 localhost dnsmasq[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/addn_hosts - 0 addresses Oct 5 06:06:56 localhost dnsmasq-dhcp[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/host Oct 5 06:06:56 localhost dnsmasq-dhcp[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/opts Oct 5 06:06:56 localhost podman[332546]: 2025-10-05 10:06:56.615438098 +0000 UTC m=+0.123885285 container remove c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a43bf973-867b-43e1-81f2-e85f8078374d, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e145 do_prune osdmap full prune enabled Oct 5 06:06:56 localhost kernel: device tap2da666e3-c2 left promiscuous mode Oct 5 06:06:56 localhost nova_compute[297021]: 2025-10-05 10:06:56.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:56 localhost ovn_controller[157794]: 2025-10-05T10:06:56Z|00301|binding|INFO|Releasing lport 2da666e3-c295-4e79-a92e-717ef91f1f45 from this chassis (sb_readonly=0) Oct 5 06:06:56 localhost ovn_controller[157794]: 2025-10-05T10:06:56Z|00302|binding|INFO|Setting lport 2da666e3-c295-4e79-a92e-717ef91f1f45 down in Southbound Oct 5 06:06:56 localhost podman[332509]: 2025-10-05 10:06:56.663480352 +0000 UTC m=+0.284749133 container start 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:06:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e146 e146: 6 total, 6 up, 6 in Oct 5 06:06:56 localhost systemd[1]: 
var-lib-containers-storage-overlay-e5a69ea66d928b1855ad337a07219a4c0bbb3f01a2347f48853c7c385e23e1f3-merged.mount: Deactivated successfully. Oct 5 06:06:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c44ce92d9b8ea7fd08d021d73019a7391deb8563674cc5b1fdde67a418c1fad6-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:56 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e146: 6 total, 6 up, 6 in Oct 5 06:06:56 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:56.691 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-a43bf973-867b-43e1-81f2-e85f8078374d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a43bf973-867b-43e1-81f2-e85f8078374d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9995ae9ec275409eab70e1b7587c3571', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=229c7a89-78a5-42c5-8d56-b5580c218c89, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2da666e3-c295-4e79-a92e-717ef91f1f45) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:56 localhost nova_compute[297021]: 2025-10-05 10:06:56.692 2 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:56 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:56.694 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 2da666e3-c295-4e79-a92e-717ef91f1f45 in datapath a43bf973-867b-43e1-81f2-e85f8078374d unbound from our chassis#033[00m Oct 5 06:06:56 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:56.696 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a43bf973-867b-43e1-81f2-e85f8078374d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:56 localhost ovn_metadata_agent[163429]: 2025-10-05 10:06:56.698 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f826c1-92b5-4919-8d20-bc37160d9f14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:56 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:56.734 272040 INFO neutron.agent.dhcp.agent [None req-e14473ec-927c-42ed-ac45-a1b1e485e53a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:55Z, description=, device_id=0d3146c4-2e7f-483d-b764-4c9d0d36d376, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=29abf5a8-1d09-4b46-9104-354dc38f420e, ip_allocation=immediate, mac_address=fa:16:3e:d6:46:3f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:51Z, description=, dns_domain=, id=2bd78593-e7fa-411a-917c-bbefb7a9e8c9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-RoutersTest-329333966, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43495, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1833, status=ACTIVE, subnets=['d13f07c2-b312-4291-a2f8-018a9ecb75d1'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:06:53Z, vlan_transparent=None, network_id=2bd78593-e7fa-411a-917c-bbefb7a9e8c9, port_security_enabled=False, project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1850, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:06:55Z on network 2bd78593-e7fa-411a-917c-bbefb7a9e8c9#033[00m Oct 5 06:06:56 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:56.858 272040 INFO neutron.agent.dhcp.agent [None req-d6dc4c1c-e265-4e72-a64a-0218a38838ed - - - - - -] DHCP configuration for ports {'954664f9-d971-4535-abb5-978e3a5addb1'} is completed#033[00m Oct 5 06:06:56 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:56.876 2 INFO neutron.agent.securitygroups_rpc [None req-9f697f19-19c5-4264-bc4a-40176852ecfc cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:56 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:56.961 272040 INFO neutron.agent.dhcp.agent [None req-87ba7586-74cf-4553-9d01-656c6b82b530 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:56 localhost systemd[1]: run-netns-qdhcp\x2da43bf973\x2d867b\x2d43e1\x2d81f2\x2de85f8078374d.mount: Deactivated successfully. 
Oct 5 06:06:57 localhost dnsmasq[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/addn_hosts - 1 addresses Oct 5 06:06:57 localhost dnsmasq-dhcp[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/host Oct 5 06:06:57 localhost podman[332589]: 2025-10-05 10:06:57.001981216 +0000 UTC m=+0.073325592 container kill 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 06:06:57 localhost dnsmasq-dhcp[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/opts Oct 5 06:06:57 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:57.332 272040 INFO neutron.agent.dhcp.agent [None req-b159f4bc-afcf-472d-9887-f5b50672f875 - - - - - -] DHCP configuration for ports {'29abf5a8-1d09-4b46-9104-354dc38f420e'} is completed#033[00m Oct 5 06:06:57 localhost neutron_sriov_agent[264984]: 2025-10-05 10:06:57.843 2 INFO neutron.agent.securitygroups_rpc [None req-a90c418c-ab0e-42ad-9460-79491727dda5 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:58 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:58.034 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:55Z, description=, 
device_id=0d3146c4-2e7f-483d-b764-4c9d0d36d376, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=29abf5a8-1d09-4b46-9104-354dc38f420e, ip_allocation=immediate, mac_address=fa:16:3e:d6:46:3f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:51Z, description=, dns_domain=, id=2bd78593-e7fa-411a-917c-bbefb7a9e8c9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-329333966, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43495, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1833, status=ACTIVE, subnets=['d13f07c2-b312-4291-a2f8-018a9ecb75d1'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:06:53Z, vlan_transparent=None, network_id=2bd78593-e7fa-411a-917c-bbefb7a9e8c9, port_security_enabled=False, project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1850, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:06:55Z on network 2bd78593-e7fa-411a-917c-bbefb7a9e8c9#033[00m Oct 5 06:06:58 localhost podman[332627]: 2025-10-05 10:06:58.27690146 +0000 UTC m=+0.061243945 container kill 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:06:58 localhost dnsmasq[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/addn_hosts - 1 addresses Oct 5 06:06:58 localhost dnsmasq-dhcp[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/host Oct 5 06:06:58 localhost dnsmasq-dhcp[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/opts Oct 5 06:06:58 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:58.722 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:58 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:58.809 272040 INFO neutron.agent.dhcp.agent [None req-550e9f3c-8050-4c57-923b-e761a63a0e26 - - - - - -] DHCP configuration for ports {'29abf5a8-1d09-4b46-9104-354dc38f420e'} is completed#033[00m Oct 5 06:06:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e146 do_prune osdmap full prune enabled Oct 5 06:06:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e147 e147: 6 total, 6 up, 6 in Oct 5 06:06:58 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e147: 6 total, 6 up, 6 in Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0. 
Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.869291) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55 Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818869384, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1346, "num_deletes": 258, "total_data_size": 1306358, "memory_usage": 1334112, "flush_reason": "Manual Compaction"} Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818879338, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 1279056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28885, "largest_seqno": 30230, "table_properties": {"data_size": 1273020, "index_size": 3378, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13471, "raw_average_key_size": 21, "raw_value_size": 1260709, "raw_average_value_size": 1976, "num_data_blocks": 147, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658736, "oldest_key_time": 1759658736, "file_creation_time": 1759658818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}} Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 10148 microseconds, and 4888 cpu microseconds. Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.879441) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 1279056 bytes OK Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.879463) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.881592) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.881639) EVENT_LOG_v1 {"time_micros": 1759658818881632, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.881661) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 1300218, prev total WAL file size 
1300218, number of live WAL files 2. Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.883469) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end) Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(1249KB)], [54(14MB)] Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818883513, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 16370720, "oldest_snapshot_seqno": -1} Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 12534 keys, 14501061 bytes, temperature: kUnknown Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818966252, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 14501061, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14432592, "index_size": 36048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31365, "raw_key_size": 338991, "raw_average_key_size": 27, "raw_value_size": 14222063, 
"raw_average_value_size": 1134, "num_data_blocks": 1335, "num_entries": 12534, "num_filter_entries": 12534, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}} Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.966581) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 14501061 bytes Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.972339) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.6 rd, 175.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.4 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(24.1) write-amplify(11.3) OK, records in: 13065, records dropped: 531 output_compression: NoCompression Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.972369) EVENT_LOG_v1 {"time_micros": 1759658818972355, "job": 32, "event": "compaction_finished", "compaction_time_micros": 82828, "compaction_time_cpu_micros": 42275, "output_level": 6, "num_output_files": 1, "total_output_size": 14501061, "num_input_records": 13065, "num_output_records": 12534, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818972854, "job": 32, "event": "table_file_deletion", "file_number": 56} Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818975186, "job": 
32, "event": "table_file_deletion", "file_number": 54} Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.883341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.975237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.975241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.975244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.975247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:06:58.975250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:59 localhost ovn_controller[157794]: 2025-10-05T10:06:59Z|00303|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:06:59 localhost nova_compute[297021]: 2025-10-05 10:06:59.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:59 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:06:59.699 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e147 do_prune osdmap full prune enabled Oct 5 06:06:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e148 e148: 6 total, 6 up, 6 in Oct 5 06:06:59 localhost ceph-mon[308154]: log_channel(cluster) 
log [DBG] : osdmap e148: 6 total, 6 up, 6 in Oct 5 06:06:59 localhost nova_compute[297021]: 2025-10-05 10:06:59.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:01 localhost dnsmasq[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/addn_hosts - 0 addresses Oct 5 06:07:01 localhost dnsmasq-dhcp[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/host Oct 5 06:07:01 localhost dnsmasq-dhcp[332568]: read /var/lib/neutron/dhcp/2bd78593-e7fa-411a-917c-bbefb7a9e8c9/opts Oct 5 06:07:01 localhost podman[332664]: 2025-10-05 10:07:01.01165618 +0000 UTC m=+0.067745682 container kill 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:07:01 localhost systemd[1]: tmp-crun.E6r9GQ.mount: Deactivated successfully. 
Oct 5 06:07:01 localhost podman[332678]: 2025-10-05 10:07:01.150626453 +0000 UTC m=+0.109239027 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 06:07:01 localhost podman[332678]: 2025-10-05 10:07:01.181012529 +0000 UTC 
m=+0.139625043 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:01 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:07:01 localhost ovn_controller[157794]: 2025-10-05T10:07:01Z|00304|binding|INFO|Releasing lport 05df4f25-004c-4eec-8a24-04eb659d113f from this chassis (sb_readonly=0) Oct 5 06:07:01 localhost ovn_controller[157794]: 2025-10-05T10:07:01Z|00305|binding|INFO|Setting lport 05df4f25-004c-4eec-8a24-04eb659d113f down in Southbound Oct 5 06:07:01 localhost kernel: device tap05df4f25-00 left promiscuous mode Oct 5 06:07:01 localhost nova_compute[297021]: 2025-10-05 10:07:01.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:01.247 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-2bd78593-e7fa-411a-917c-bbefb7a9e8c9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd78593-e7fa-411a-917c-bbefb7a9e8c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6e1c06-d130-426e-b901-3f60565235d4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=05df4f25-004c-4eec-8a24-04eb659d113f) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:01.249 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 05df4f25-004c-4eec-8a24-04eb659d113f in datapath 2bd78593-e7fa-411a-917c-bbefb7a9e8c9 unbound from our chassis#033[00m Oct 5 06:07:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:01.252 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd78593-e7fa-411a-917c-bbefb7a9e8c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:07:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:01.253 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ddfd19-4a6c-4a56-8628-3068e61efa4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:01 localhost nova_compute[297021]: 2025-10-05 10:07:01.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:01 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:01.546 2 INFO neutron.agent.securitygroups_rpc [None req-967fdf5d-46e1-48e2-b529-a5dcf26b47fa cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:07:01 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/21619120' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:07:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:07:01 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/21619120' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:07:02 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:02.253 2 INFO neutron.agent.securitygroups_rpc [None req-e438c14c-9f1f-4532-9e26-9190e47ce09e cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:02 localhost dnsmasq[332568]: exiting on receipt of SIGTERM Oct 5 06:07:02 localhost podman[332723]: 2025-10-05 10:07:02.601514976 +0000 UTC m=+0.058941121 container kill 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true) Oct 5 06:07:02 localhost systemd[1]: libpod-6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a.scope: Deactivated successfully. 
Oct 5 06:07:02 localhost podman[332736]: 2025-10-05 10:07:02.673864321 +0000 UTC m=+0.055397055 container died 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:07:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a-userdata-shm.mount: Deactivated successfully. Oct 5 06:07:02 localhost podman[332736]: 2025-10-05 10:07:02.714869105 +0000 UTC m=+0.096401789 container cleanup 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:07:02 localhost systemd[1]: libpod-conmon-6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a.scope: Deactivated successfully. 
Oct 5 06:07:02 localhost podman[332737]: 2025-10-05 10:07:02.754189903 +0000 UTC m=+0.131844381 container remove 6d0e18a4113bcabac709993a84fe8c089960835501572e494a8de86459a3079a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bd78593-e7fa-411a-917c-bbefb7a9e8c9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:03 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:03.171 272040 INFO neutron.agent.dhcp.agent [None req-682f7193-af1e-473d-939d-24f630f7f351 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:03 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:03.301 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:03 localhost systemd[1]: var-lib-containers-storage-overlay-21236e8a5d08f3b13c9eb8782982e2105ca4144190d2c158a53c556a428f48cc-merged.mount: Deactivated successfully. Oct 5 06:07:03 localhost systemd[1]: run-netns-qdhcp\x2d2bd78593\x2de7fa\x2d411a\x2d917c\x2dbbefb7a9e8c9.mount: Deactivated successfully. 
Oct 5 06:07:03 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:03.667 2 INFO neutron.agent.securitygroups_rpc [None req-113037f4-e6a1-483a-9c1d-2f96e0139405 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:03 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:03.672 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:04 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:04.036 2 INFO neutron.agent.securitygroups_rpc [None req-3496327f-e2d7-49dc-ab01-143e65f195ad 6ef678b66aca4c389c46bd32e9f75f44 8b0117c734aa4a26be5c16b9cc3abffe - - default default] Security group rule updated ['34787280-e67a-4595-a7a5-2948c88f70c0']#033[00m Oct 5 06:07:04 localhost ovn_controller[157794]: 2025-10-05T10:07:04Z|00306|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:07:04 localhost nova_compute[297021]: 2025-10-05 10:07:04.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:07:04 localhost podman[332763]: 2025-10-05 10:07:04.664213505 +0000 UTC m=+0.076662083 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible) Oct 5 06:07:04 localhost podman[332763]: 2025-10-05 10:07:04.70375852 +0000 UTC m=+0.116207088 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:04 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:07:04 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:04.798 2 INFO neutron.agent.securitygroups_rpc [None req-10105f5e-927c-4e7a-9302-4d7e601f16d3 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:04 localhost nova_compute[297021]: 2025-10-05 10:07:04.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e148 do_prune osdmap full prune enabled Oct 5 06:07:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 e149: 6 total, 6 up, 6 in Oct 5 06:07:06 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e149: 6 total, 6 up, 6 in Oct 5 06:07:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:07:07 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:07.935 272040 INFO neutron.agent.linux.ip_lib [None req-888584ff-56a2-4d85-bd89-9a87b04c0508 - - - - - -] Device tap2ed6a3ea-de cannot be used as it has no MAC address#033[00m Oct 5 06:07:07 localhost podman[332791]: 2025-10-05 10:07:07.950339928 +0000 UTC m=+0.091664749 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 06:07:07 localhost nova_compute[297021]: 2025-10-05 10:07:07.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:07 localhost kernel: device tap2ed6a3ea-de entered promiscuous mode Oct 5 06:07:07 localhost NetworkManager[5981]: [1759658827.9744] manager: (tap2ed6a3ea-de): new Generic device (/org/freedesktop/NetworkManager/Devices/50) Oct 5 06:07:07 localhost nova_compute[297021]: 2025-10-05 10:07:07.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:07 localhost ovn_controller[157794]: 2025-10-05T10:07:07Z|00307|binding|INFO|Claiming lport 2ed6a3ea-defe-469a-bf8a-11d74d78d6fd for this chassis. Oct 5 06:07:07 localhost ovn_controller[157794]: 2025-10-05T10:07:07Z|00308|binding|INFO|2ed6a3ea-defe-469a-bf8a-11d74d78d6fd: Claiming unknown Oct 5 06:07:07 localhost systemd-udevd[332817]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:07:07 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:07.989 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-5840a30a-b372-49ca-b438-2e4c61392707', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5840a30a-b372-49ca-b438-2e4c61392707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=434434f2-bb61-4db4-b978-cee9c27d0ab8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2ed6a3ea-defe-469a-bf8a-11d74d78d6fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:07 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:07.991 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 2ed6a3ea-defe-469a-bf8a-11d74d78d6fd in datapath 5840a30a-b372-49ca-b438-2e4c61392707 bound to our chassis#033[00m Oct 5 06:07:07 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:07.992 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5840a30a-b372-49ca-b438-2e4c61392707 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:07 localhost podman[332791]: 2025-10-05 10:07:07.993116101 +0000 UTC m=+0.134440872 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:07 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:07.994 163567 
DEBUG oslo.privsep.daemon [-] privsep: reply[d3b5f1c5-ddb9-4e80-be5e-034fc682b4b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:08 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:07:08 localhost journal[237931]: ethtool ioctl error on tap2ed6a3ea-de: No such device Oct 5 06:07:08 localhost ovn_controller[157794]: 2025-10-05T10:07:08Z|00309|binding|INFO|Setting lport 2ed6a3ea-defe-469a-bf8a-11d74d78d6fd ovn-installed in OVS Oct 5 06:07:08 localhost ovn_controller[157794]: 2025-10-05T10:07:08Z|00310|binding|INFO|Setting lport 2ed6a3ea-defe-469a-bf8a-11d74d78d6fd up in Southbound Oct 5 06:07:08 localhost journal[237931]: ethtool ioctl error on tap2ed6a3ea-de: No such device Oct 5 06:07:08 localhost nova_compute[297021]: 2025-10-05 10:07:08.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:08 localhost journal[237931]: ethtool ioctl error on tap2ed6a3ea-de: No such device Oct 5 06:07:08 localhost journal[237931]: ethtool ioctl error on tap2ed6a3ea-de: No such device Oct 5 06:07:08 localhost journal[237931]: ethtool ioctl error on tap2ed6a3ea-de: No such device Oct 5 06:07:08 localhost journal[237931]: ethtool ioctl error on tap2ed6a3ea-de: No such device Oct 5 06:07:08 localhost journal[237931]: ethtool ioctl error on tap2ed6a3ea-de: No such device Oct 5 06:07:08 localhost journal[237931]: ethtool ioctl error on tap2ed6a3ea-de: No such device Oct 5 06:07:08 localhost nova_compute[297021]: 2025-10-05 10:07:08.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:08 localhost nova_compute[297021]: 2025-10-05 10:07:08.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:08 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:08.826 2 INFO neutron.agent.securitygroups_rpc [None req-97d4cddd-c529-446f-b001-82d6e8bc8d22 ab7690a92b524e11ab2ac3dec938162a 32b7a2f31633456293e1c4169c868ef0 - - default default] Security group member updated ['bc75949e-95f2-4d6f-bfbc-251e7f7ef75d']#033[00m Oct 5 06:07:08 localhost podman[332889]: Oct 5 06:07:09 localhost podman[332889]: 2025-10-05 10:07:09.00866934 +0000 UTC m=+0.083484478 container create 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:07:09 localhost systemd[1]: Started libpod-conmon-59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2.scope. Oct 5 06:07:09 localhost podman[332889]: 2025-10-05 10:07:08.970742581 +0000 UTC m=+0.045557709 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:09 localhost systemd[1]: Started libcrun container. 
Oct 5 06:07:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3af3c34521e597a1ed8aa8902fff3118eaab4f3dfb87ed6b4d9a45e5b128aa2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:09 localhost podman[332889]: 2025-10-05 10:07:09.099756235 +0000 UTC m=+0.174571373 container init 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:07:09 localhost podman[332889]: 2025-10-05 10:07:09.116776817 +0000 UTC m=+0.191591965 container start 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:07:09 localhost dnsmasq[332907]: started, version 2.85 cachesize 150 Oct 5 06:07:09 localhost dnsmasq[332907]: DNS service limited to local subnets Oct 5 06:07:09 localhost dnsmasq[332907]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:09 localhost dnsmasq[332907]: warning: no upstream servers configured Oct 
5 06:07:09 localhost dnsmasq-dhcp[332907]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:07:09 localhost dnsmasq[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/addn_hosts - 0 addresses Oct 5 06:07:09 localhost dnsmasq-dhcp[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/host Oct 5 06:07:09 localhost dnsmasq-dhcp[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/opts Oct 5 06:07:09 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:09.378 272040 INFO neutron.agent.dhcp.agent [None req-051f235b-765d-40c9-ad30-0d5048a0b75c - - - - - -] DHCP configuration for ports {'de9ce6fc-ff83-466a-88fa-c1dbc7e2797f'} is completed#033[00m Oct 5 06:07:09 localhost nova_compute[297021]: 2025-10-05 10:07:09.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:10 localhost systemd[1]: tmp-crun.x9c8ea.mount: Deactivated successfully. 
Oct 5 06:07:10 localhost nova_compute[297021]: 2025-10-05 10:07:10.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:10.695 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:10 localhost nova_compute[297021]: 2025-10-05 10:07:10.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:10.697 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:07:10 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:10.930 2 INFO neutron.agent.securitygroups_rpc [None req-86d0464d-16db-48b0-8aaa-42e0721c5e19 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['b41d26b0-78a8-4541-9b0c-eb273b0740f6']#033[00m Oct 5 06:07:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:10.935 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, 
allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:10Z, description=, device_id=78c061eb-1679-4181-813b-9e3b2ae54fef, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d2dd3711-add4-4bfd-aa12-f2be9be07196, ip_allocation=immediate, mac_address=fa:16:3e:3e:18:7b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:05Z, description=, dns_domain=, id=5840a30a-b372-49ca-b438-2e4c61392707, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1779246243, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=37402, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1909, status=ACTIVE, subnets=['b82ce260-63a1-4957-925f-7f99f8bc1059'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:07:06Z, vlan_transparent=None, network_id=5840a30a-b372-49ca-b438-2e4c61392707, port_security_enabled=False, project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1968, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:07:10Z on network 5840a30a-b372-49ca-b438-2e4c61392707#033[00m Oct 5 06:07:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:07:11 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:11.126 2 INFO neutron.agent.securitygroups_rpc [None req-e8fe7c87-18bf-46af-bac6-b9952ace8090 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:11.128 272040 INFO neutron.agent.linux.ip_lib [None req-cda87ca9-8432-457b-9ac0-87e3a8e97eaa - - - - - -] Device tap8044c375-8d cannot be used as it has no MAC address#033[00m Oct 5 06:07:11 localhost podman[332921]: 2025-10-05 10:07:11.139155586 +0000 UTC m=+0.092949177 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64) Oct 5 06:07:11 localhost nova_compute[297021]: 2025-10-05 10:07:11.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:11 localhost podman[332921]: 2025-10-05 10:07:11.199694511 +0000 UTC m=+0.153488142 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, summary=Provides the latest release of the 
minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm) Oct 5 06:07:11 localhost kernel: device tap8044c375-8d entered promiscuous mode Oct 5 06:07:11 localhost NetworkManager[5981]: [1759658831.2045] manager: (tap8044c375-8d): new Generic device (/org/freedesktop/NetworkManager/Devices/51) Oct 5 06:07:11 localhost ovn_controller[157794]: 2025-10-05T10:07:11Z|00311|binding|INFO|Claiming lport 8044c375-8db1-4bec-8fb2-f3ded3d044b5 for this chassis. Oct 5 06:07:11 localhost ovn_controller[157794]: 2025-10-05T10:07:11Z|00312|binding|INFO|8044c375-8db1-4bec-8fb2-f3ded3d044b5: Claiming unknown Oct 5 06:07:11 localhost systemd-udevd[332965]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:07:11 localhost nova_compute[297021]: 2025-10-05 10:07:11.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:11 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:07:11 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:11.230 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-c6959e78-4cb8-4c6d-8e35-67578237afa7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6959e78-4cb8-4c6d-8e35-67578237afa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce8f520e-1216-4143-983e-0e65613ada8a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8044c375-8db1-4bec-8fb2-f3ded3d044b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:11 localhost journal[237931]: ethtool ioctl error on tap8044c375-8d: No such device Oct 5 06:07:11 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:11.232 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 8044c375-8db1-4bec-8fb2-f3ded3d044b5 in datapath c6959e78-4cb8-4c6d-8e35-67578237afa7 bound to our chassis#033[00m Oct 5 06:07:11 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:11.234 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c6959e78-4cb8-4c6d-8e35-67578237afa7 or it has no MAC or IP 
addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:11 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:11.235 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[f17538c5-cd33-447b-a563-c2a3b7b3bb34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:11 localhost ovn_controller[157794]: 2025-10-05T10:07:11Z|00313|binding|INFO|Setting lport 8044c375-8db1-4bec-8fb2-f3ded3d044b5 ovn-installed in OVS Oct 5 06:07:11 localhost ovn_controller[157794]: 2025-10-05T10:07:11Z|00314|binding|INFO|Setting lport 8044c375-8db1-4bec-8fb2-f3ded3d044b5 up in Southbound Oct 5 06:07:11 localhost journal[237931]: ethtool ioctl error on tap8044c375-8d: No such device Oct 5 06:07:11 localhost nova_compute[297021]: 2025-10-05 10:07:11.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:11 localhost journal[237931]: ethtool ioctl error on tap8044c375-8d: No such device Oct 5 06:07:11 localhost journal[237931]: ethtool ioctl error on tap8044c375-8d: No such device Oct 5 06:07:11 localhost journal[237931]: ethtool ioctl error on tap8044c375-8d: No such device Oct 5 06:07:11 localhost journal[237931]: ethtool ioctl error on tap8044c375-8d: No such device Oct 5 06:07:11 localhost journal[237931]: ethtool ioctl error on tap8044c375-8d: No such device Oct 5 06:07:11 localhost journal[237931]: ethtool ioctl error on tap8044c375-8d: No such device Oct 5 06:07:11 localhost dnsmasq[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/addn_hosts - 1 addresses Oct 5 06:07:11 localhost dnsmasq-dhcp[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/host Oct 5 06:07:11 localhost dnsmasq-dhcp[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/opts Oct 5 06:07:11 
localhost podman[332942]: 2025-10-05 10:07:11.267497262 +0000 UTC m=+0.159918176 container kill 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 06:07:11 localhost nova_compute[297021]: 2025-10-05 10:07:11.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:11 localhost nova_compute[297021]: 2025-10-05 10:07:11.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:11.557 272040 INFO neutron.agent.dhcp.agent [None req-83e51d0c-2f8a-49fc-b055-45e77213a946 - - - - - -] DHCP configuration for ports {'d2dd3711-add4-4bfd-aa12-f2be9be07196'} is completed#033[00m Oct 5 06:07:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:11 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:11.922 2 INFO neutron.agent.securitygroups_rpc [None req-7ac4c9ba-a526-4e82-a322-0547237d9260 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['b41d26b0-78a8-4541-9b0c-eb273b0740f6']#033[00m Oct 5 06:07:12 localhost podman[333045]: Oct 5 06:07:12 localhost podman[333045]: 2025-10-05 10:07:12.260365838 +0000 UTC m=+0.091340943 container create 
1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:12 localhost systemd[1]: Started libpod-conmon-1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6.scope. Oct 5 06:07:12 localhost podman[333045]: 2025-10-05 10:07:12.216712616 +0000 UTC m=+0.047687761 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:12 localhost systemd[1]: tmp-crun.YAl6a2.mount: Deactivated successfully. Oct 5 06:07:12 localhost systemd[1]: Started libcrun container. 
Oct 5 06:07:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31824e920ec9ca38c6091f785a19ec96637bb7719a15d640d65b9a6dd82d2647/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:12 localhost podman[333045]: 2025-10-05 10:07:12.342229946 +0000 UTC m=+0.173205051 container init 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:07:12 localhost podman[333045]: 2025-10-05 10:07:12.351931507 +0000 UTC m=+0.182906622 container start 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:12 localhost dnsmasq[333063]: started, version 2.85 cachesize 150 Oct 5 06:07:12 localhost dnsmasq[333063]: DNS service limited to local subnets Oct 5 06:07:12 localhost dnsmasq[333063]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:12 localhost dnsmasq[333063]: warning: no upstream servers configured Oct 
5 06:07:12 localhost dnsmasq-dhcp[333063]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:07:12 localhost dnsmasq[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/addn_hosts - 0 addresses Oct 5 06:07:12 localhost dnsmasq-dhcp[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/host Oct 5 06:07:12 localhost dnsmasq-dhcp[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/opts Oct 5 06:07:12 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:12.412 272040 INFO neutron.agent.dhcp.agent [None req-cda87ca9-8432-457b-9ac0-87e3a8e97eaa - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:10Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f9c3b75c-21bf-48d2-aec1-fa74dea086f5, ip_allocation=immediate, mac_address=fa:16:3e:d6:a9:ac, name=tempest-PortsIpV6TestJSON-1872193123, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:08Z, description=, dns_domain=, id=c6959e78-4cb8-4c6d-8e35-67578237afa7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-14618135, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16466, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1949, status=ACTIVE, subnets=['d9e1dc54-94e2-4290-aa08-534c80dcfe79'], tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:10Z, vlan_transparent=None, network_id=c6959e78-4cb8-4c6d-8e35-67578237afa7, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['72863814-32f3-4006-a64f-d6dada584ee1'], standard_attr_id=1972, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:10Z on network c6959e78-4cb8-4c6d-8e35-67578237afa7#033[00m Oct 5 06:07:12 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:12.460 2 INFO neutron.agent.securitygroups_rpc [None req-1ca8277a-d8c6-48e2-b92c-b668db68cd12 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:12 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:12.502 272040 INFO neutron.agent.dhcp.agent [None req-9dadf137-9808-49fd-ba4d-10bf76386fc3 - - - - - -] DHCP configuration for ports {'aba189b2-b4df-4b08-afab-0f238f5b3ff7'} is completed#033[00m Oct 5 06:07:12 localhost dnsmasq[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/addn_hosts - 1 addresses Oct 5 06:07:12 localhost dnsmasq-dhcp[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/host Oct 5 06:07:12 localhost podman[333082]: 2025-10-05 10:07:12.608081074 +0000 UTC m=+0.058200554 container kill 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:12 localhost dnsmasq-dhcp[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/opts Oct 5 06:07:12 localhost neutron_sriov_agent[264984]: 
2025-10-05 10:07:12.663 2 INFO neutron.agent.securitygroups_rpc [None req-f97f3ad6-e6d9-428a-8c53-4f6a2eec87df ab7690a92b524e11ab2ac3dec938162a 32b7a2f31633456293e1c4169c868ef0 - - default default] Security group member updated ['bc75949e-95f2-4d6f-bfbc-251e7f7ef75d']#033[00m Oct 5 06:07:12 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:12.754 272040 INFO neutron.agent.dhcp.agent [None req-cda87ca9-8432-457b-9ac0-87e3a8e97eaa - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:12Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d32d3809-6619-4868-9f95-806eb582388b, ip_allocation=immediate, mac_address=fa:16:3e:5b:b5:e5, name=tempest-PortsIpV6TestJSON-1033885525, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:08Z, description=, dns_domain=, id=c6959e78-4cb8-4c6d-8e35-67578237afa7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-14618135, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16466, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1949, status=ACTIVE, subnets=['d9e1dc54-94e2-4290-aa08-534c80dcfe79'], tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:10Z, vlan_transparent=None, network_id=c6959e78-4cb8-4c6d-8e35-67578237afa7, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['72863814-32f3-4006-a64f-d6dada584ee1'], standard_attr_id=1985, status=DOWN, tags=[], 
tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:12Z on network c6959e78-4cb8-4c6d-8e35-67578237afa7#033[00m Oct 5 06:07:12 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:12.941 272040 INFO neutron.agent.dhcp.agent [None req-9aa8da3b-f862-4ac8-a489-543bd9a895e6 - - - - - -] DHCP configuration for ports {'f9c3b75c-21bf-48d2-aec1-fa74dea086f5'} is completed#033[00m Oct 5 06:07:12 localhost dnsmasq[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/addn_hosts - 2 addresses Oct 5 06:07:12 localhost dnsmasq-dhcp[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/host Oct 5 06:07:12 localhost dnsmasq-dhcp[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/opts Oct 5 06:07:12 localhost podman[333121]: 2025-10-05 10:07:12.951170826 +0000 UTC m=+0.065932042 container kill 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:07:13 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:13.228 272040 INFO neutron.agent.dhcp.agent [None req-24cd9f0d-cf0c-46e5-8894-23365c6de2e1 - - - - - -] DHCP configuration for ports {'d32d3809-6619-4868-9f95-806eb582388b'} is completed#033[00m Oct 5 06:07:13 localhost nova_compute[297021]: 2025-10-05 10:07:13.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:13 localhost nova_compute[297021]: 2025-10-05 10:07:13.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:13 localhost nova_compute[297021]: 2025-10-05 10:07:13.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:07:13 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:13.699 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:07:13 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:13.778 2 INFO neutron.agent.securitygroups_rpc [None req-08df03d9-f357-456c-9850-e7c7f1852f03 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:13 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:13.876 2 INFO neutron.agent.securitygroups_rpc [None req-f76f0cd7-1a5d-417c-a073-8a4d0637b1a6 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:14 localhost dnsmasq[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/addn_hosts - 1 addresses Oct 5 06:07:14 localhost podman[333157]: 2025-10-05 
10:07:14.041747136 +0000 UTC m=+0.060062494 container kill 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:07:14 localhost dnsmasq-dhcp[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/host Oct 5 06:07:14 localhost dnsmasq-dhcp[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/opts Oct 5 06:07:14 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:14.053 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:10Z, description=, device_id=78c061eb-1679-4181-813b-9e3b2ae54fef, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d2dd3711-add4-4bfd-aa12-f2be9be07196, ip_allocation=immediate, mac_address=fa:16:3e:3e:18:7b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:05Z, description=, dns_domain=, id=5840a30a-b372-49ca-b438-2e4c61392707, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1779246243, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=37402, qos_policy_id=None, revision_number=2, router:external=False, 
shared=False, standard_attr_id=1909, status=ACTIVE, subnets=['b82ce260-63a1-4957-925f-7f99f8bc1059'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:07:06Z, vlan_transparent=None, network_id=5840a30a-b372-49ca-b438-2e4c61392707, port_security_enabled=False, project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1968, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:07:10Z on network 5840a30a-b372-49ca-b438-2e4c61392707#033[00m Oct 5 06:07:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:07:14 localhost podman[333169]: 2025-10-05 10:07:14.15662627 +0000 UTC m=+0.082588058 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:07:14 localhost podman[333169]: 2025-10-05 10:07:14.165323414 +0000 UTC m=+0.091285172 container exec_died 
9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:07:14 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 06:07:14 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:14.287 2 INFO neutron.agent.securitygroups_rpc [None req-a10b7f8d-1ef0-4a56-992e-6e3aa0314a60 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:14 localhost dnsmasq[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/addn_hosts - 1 addresses Oct 5 06:07:14 localhost dnsmasq-dhcp[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/host Oct 5 06:07:14 localhost podman[333217]: 2025-10-05 10:07:14.299894967 +0000 UTC m=+0.068178191 container kill 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:14 localhost dnsmasq-dhcp[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/opts Oct 5 06:07:14 localhost systemd[1]: tmp-crun.qzuckA.mount: Deactivated successfully. 
Oct 5 06:07:14 localhost nova_compute[297021]: 2025-10-05 10:07:14.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:14 localhost nova_compute[297021]: 2025-10-05 10:07:14.423 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 06:07:14 localhost nova_compute[297021]: 2025-10-05 10:07:14.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:14 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:14.595 272040 INFO neutron.agent.dhcp.agent [None req-afed0c6f-6150-4fea-99a7-950185624832 - - - - - -] DHCP configuration for ports {'d2dd3711-add4-4bfd-aa12-f2be9be07196'} is completed#033[00m Oct 5 06:07:14 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:14.809 2 INFO neutron.agent.securitygroups_rpc [None req-209ca919-c5d6-45c0-a2a0-c1d200e5208b cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:14 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:14.958 2 INFO neutron.agent.securitygroups_rpc [None req-55de9688-4b87-4f8a-82a3-6735de35f494 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:15 localhost dnsmasq[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/addn_hosts - 0 addresses Oct 5 06:07:15 localhost dnsmasq-dhcp[333063]: read 
/var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/host Oct 5 06:07:15 localhost dnsmasq-dhcp[333063]: read /var/lib/neutron/dhcp/c6959e78-4cb8-4c6d-8e35-67578237afa7/opts Oct 5 06:07:15 localhost nova_compute[297021]: 2025-10-05 10:07:15.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:15 localhost podman[333253]: 2025-10-05 10:07:15.026666689 +0000 UTC m=+0.062163349 container kill 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:15 localhost nova_compute[297021]: 2025-10-05 10:07:15.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:15 localhost nova_compute[297021]: 2025-10-05 10:07:15.438 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:16 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:16.059 2 INFO neutron.agent.securitygroups_rpc [None req-d262c3fd-e84a-4a8c-a542-df0ebe7f723a 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:16 localhost dnsmasq[333063]: exiting on receipt of SIGTERM Oct 5 06:07:16 localhost 
podman[333291]: 2025-10-05 10:07:16.2434858 +0000 UTC m=+0.059498489 container kill 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 06:07:16 localhost systemd[1]: libpod-1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6.scope: Deactivated successfully. Oct 5 06:07:16 localhost podman[333304]: 2025-10-05 10:07:16.304732274 +0000 UTC m=+0.048770341 container died 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:07:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:07:16 localhost podman[333304]: 2025-10-05 10:07:16.347922363 +0000 UTC m=+0.091960360 container cleanup 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:16 localhost systemd[1]: libpod-conmon-1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6.scope: Deactivated successfully. Oct 5 06:07:16 localhost podman[333306]: 2025-10-05 10:07:16.375053432 +0000 UTC m=+0.109992474 container remove 1813d8bc6ed1ade7c426e38c56d0a30321dc9fe80e28cea336ef72c2dd9befa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6959e78-4cb8-4c6d-8e35-67578237afa7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:07:16 localhost nova_compute[297021]: 2025-10-05 10:07:16.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:16 localhost ovn_controller[157794]: 2025-10-05T10:07:16Z|00315|binding|INFO|Releasing lport 8044c375-8db1-4bec-8fb2-f3ded3d044b5 from this chassis (sb_readonly=0) Oct 5 06:07:16 localhost kernel: device tap8044c375-8d left promiscuous mode Oct 5 06:07:16 localhost ovn_controller[157794]: 
2025-10-05T10:07:16Z|00316|binding|INFO|Setting lport 8044c375-8db1-4bec-8fb2-f3ded3d044b5 down in Southbound Oct 5 06:07:16 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:16.396 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-c6959e78-4cb8-4c6d-8e35-67578237afa7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6959e78-4cb8-4c6d-8e35-67578237afa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce8f520e-1216-4143-983e-0e65613ada8a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8044c375-8db1-4bec-8fb2-f3ded3d044b5) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:16 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:16.398 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 8044c375-8db1-4bec-8fb2-f3ded3d044b5 in datapath c6959e78-4cb8-4c6d-8e35-67578237afa7 unbound from our chassis#033[00m Oct 5 06:07:16 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:16.400 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata 
port for network c6959e78-4cb8-4c6d-8e35-67578237afa7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:16 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:16.402 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f960b4-a3ce-488c-8357-78aad9158cce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:16 localhost nova_compute[297021]: 2025-10-05 10:07:16.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:16 localhost nova_compute[297021]: 2025-10-05 10:07:16.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:16 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:16.662 272040 INFO neutron.agent.dhcp.agent [None req-befbfa03-eeb5-4d8b-b1ad-c8e586628c0a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:16 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:16.688 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:16 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:16.701 2 INFO neutron.agent.securitygroups_rpc [None req-5c25fffb-d48c-4f4b-9ba8-ffa859290a6a 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:17 localhost 
neutron_dhcp_agent[272036]: 2025-10-05 10:07:17.003 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:17 localhost systemd[1]: var-lib-containers-storage-overlay-31824e920ec9ca38c6091f785a19ec96637bb7719a15d640d65b9a6dd82d2647-merged.mount: Deactivated successfully. Oct 5 06:07:17 localhost systemd[1]: run-netns-qdhcp\x2dc6959e78\x2d4cb8\x2d4c6d\x2d8e35\x2d67578237afa7.mount: Deactivated successfully. Oct 5 06:07:17 localhost ovn_controller[157794]: 2025-10-05T10:07:17Z|00317|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:07:17 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:17.389 2 INFO neutron.agent.securitygroups_rpc [None req-5331fe0c-6def-43a6-a28b-e8e96a993e48 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:17 localhost nova_compute[297021]: 2025-10-05 10:07:17.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:17 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:17.645 2 INFO neutron.agent.securitygroups_rpc [None req-9f4832ee-5d49-487c-96d5-26bed528c76f 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:17 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:17.955 2 INFO neutron.agent.securitygroups_rpc [None req-169b8896-0bc5-46ee-959a-0db3f0bfd2ea 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:18 localhost nova_compute[297021]: 2025-10-05 10:07:18.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f 
- - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:18 localhost nova_compute[297021]: 2025-10-05 10:07:18.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:18 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:18.904 2 INFO neutron.agent.securitygroups_rpc [None req-f90a23ab-2c05-43e9-a68f-6c5dbe010538 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:07:19 localhost systemd[1]: tmp-crun.QKe2ya.mount: Deactivated successfully. 
Oct 5 06:07:19 localhost podman[333335]: 2025-10-05 10:07:19.677796125 +0000 UTC m=+0.085898137 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:07:19 localhost podman[333335]: 2025-10-05 10:07:19.69101419 +0000 UTC m=+0.099116152 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:07:19 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:07:19 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:19.826 2 INFO neutron.agent.securitygroups_rpc [None req-b4147394-98fe-43d4-98c1-416cb7b976dd 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:19 localhost nova_compute[297021]: 2025-10-05 10:07:19.858 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:19 localhost nova_compute[297021]: 2025-10-05 10:07:19.882 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Triggering sync for uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Oct 5 06:07:19 localhost nova_compute[297021]: 2025-10-05 10:07:19.883 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:07:19 localhost nova_compute[297021]: 2025-10-05 10:07:19.883 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:07:19 localhost nova_compute[297021]: 2025-10-05 10:07:19.935 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" "released" 
by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:07:20 localhost nova_compute[297021]: 2025-10-05 10:07:20.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:20 localhost nova_compute[297021]: 2025-10-05 10:07:20.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:20 localhost nova_compute[297021]: 2025-10-05 10:07:20.454 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:20.469 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:07:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:20.470 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:07:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:20.470 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:07:21 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:21.150 2 INFO neutron.agent.securitygroups_rpc [None req-b9a17396-b928-4b5b-98f0-a472d21cec0b 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['78bf4040-6e9f-4ef0-bd57-023c16739605']#033[00m Oct 5 06:07:21 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:21.431 272040 INFO neutron.agent.linux.ip_lib [None req-8a8c84d3-445f-4b48-8abb-3d22cf025973 - - - - - -] Device tap2859237a-46 cannot be used as it has no MAC address#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.438 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:21 localhost kernel: device tap2859237a-46 entered promiscuous mode Oct 5 06:07:21 localhost NetworkManager[5981]: [1759658841.4614] manager: (tap2859237a-46): new Generic device (/org/freedesktop/NetworkManager/Devices/52) Oct 5 06:07:21 localhost podman[248506]: time="2025-10-05T10:07:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:07:21 localhost ovn_controller[157794]: 2025-10-05T10:07:21Z|00318|binding|INFO|Claiming lport 2859237a-46a1-456f-99ca-ba12f9f04302 for this chassis. 
Oct 5 06:07:21 localhost ovn_controller[157794]: 2025-10-05T10:07:21Z|00319|binding|INFO|2859237a-46a1-456f-99ca-ba12f9f04302: Claiming unknown Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.461 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.461 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.462 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.462 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.462 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:07:21 localhost 
systemd-udevd[333368]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:07:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:21.472 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-463c915a-cf01-4e69-98f4-452cdcea4bb5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-463c915a-cf01-4e69-98f4-452cdcea4bb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bf835b-d0de-407f-abca-b4ea98d5751f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2859237a-46a1-456f-99ca-ba12f9f04302) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:21.474 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 2859237a-46a1-456f-99ca-ba12f9f04302 in datapath 463c915a-cf01-4e69-98f4-452cdcea4bb5 bound to our chassis#033[00m Oct 5 06:07:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:21.475 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 463c915a-cf01-4e69-98f4-452cdcea4bb5 or it has no MAC or IP addresses 
configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:21.476 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[abf8a4a1-50a5-41b8-9cb9-a4fb2f837b9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:21 localhost podman[248506]: @ - - [05/Oct/2025:10:07:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147502 "" "Go-http-client/1.1" Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:21 localhost ovn_controller[157794]: 2025-10-05T10:07:21Z|00320|binding|INFO|Setting lport 2859237a-46a1-456f-99ca-ba12f9f04302 ovn-installed in OVS Oct 5 06:07:21 localhost ovn_controller[157794]: 2025-10-05T10:07:21Z|00321|binding|INFO|Setting lport 2859237a-46a1-456f-99ca-ba12f9f04302 up in Southbound Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:21 localhost podman[248506]: @ - - [05/Oct/2025:10:07:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19848 "" "Go-http-client/1.1" Oct 5 06:07:21 localhost 
nova_compute[297021]: 2025-10-05 10:07:21.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:07:21 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/917185743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.918 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.991 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:07:21 localhost nova_compute[297021]: 2025-10-05 10:07:21.991 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:07:22 localhost openstack_network_exporter[250601]: ERROR 
10:07:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:07:22 localhost openstack_network_exporter[250601]: ERROR 10:07:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:07:22 localhost openstack_network_exporter[250601]: ERROR 10:07:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:07:22 localhost openstack_network_exporter[250601]: ERROR 10:07:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:07:22 localhost openstack_network_exporter[250601]: Oct 5 06:07:22 localhost openstack_network_exporter[250601]: ERROR 10:07:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:07:22 localhost openstack_network_exporter[250601]: Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.215 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.217 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11207MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.217 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.218 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:07:22 localhost podman[333445]: Oct 5 06:07:22 localhost podman[333445]: 2025-10-05 10:07:22.445977987 +0000 UTC m=+0.091086407 container create d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:07:22 localhost systemd[1]: Started libpod-conmon-d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b.scope. 
Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.490 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.491 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.491 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:07:22 localhost podman[333445]: 2025-10-05 10:07:22.40217084 +0000 UTC m=+0.047279270 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:22 localhost systemd[1]: tmp-crun.Ef97mK.mount: Deactivated successfully. Oct 5 06:07:22 localhost systemd[1]: Started libcrun container. 
Oct 5 06:07:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ea767ebb04343e8b8939fbacce56ab391bfc1a06f3e8c40ececec4cc57ba08/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:22 localhost podman[333445]: 2025-10-05 10:07:22.537982027 +0000 UTC m=+0.183090447 container init d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:07:22 localhost podman[333445]: 2025-10-05 10:07:22.547281666 +0000 UTC m=+0.192390086 container start d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 06:07:22 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:22.547 2 INFO neutron.agent.securitygroups_rpc [None req-693ed2f6-d8d5-4791-b6dc-4285cd78eff9 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:22 localhost dnsmasq[333463]: started, version 2.85 cachesize 150 Oct 5 06:07:22 localhost 
dnsmasq[333463]: DNS service limited to local subnets Oct 5 06:07:22 localhost dnsmasq[333463]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:22 localhost dnsmasq[333463]: warning: no upstream servers configured Oct 5 06:07:22 localhost dnsmasq-dhcp[333463]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:07:22 localhost dnsmasq[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/addn_hosts - 0 addresses Oct 5 06:07:22 localhost dnsmasq-dhcp[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/host Oct 5 06:07:22 localhost dnsmasq-dhcp[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/opts Oct 5 06:07:22 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:22.675 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:21Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6a2be869-047c-4812-b533-b05d50c33b8c, ip_allocation=immediate, mac_address=fa:16:3e:3d:27:45, name=tempest-PortsIpV6TestJSON-1075879342, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:17Z, description=, dns_domain=, id=463c915a-cf01-4e69-98f4-452cdcea4bb5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-2144092581, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14523, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2022, status=ACTIVE, 
subnets=['f4753577-1707-4bc8-a6c1-3ace291fb395'], tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:20Z, vlan_transparent=None, network_id=463c915a-cf01-4e69-98f4-452cdcea4bb5, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['72863814-32f3-4006-a64f-d6dada584ee1'], standard_attr_id=2038, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:22Z on network 463c915a-cf01-4e69-98f4-452cdcea4bb5#033[00m Oct 5 06:07:22 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:22.698 272040 INFO neutron.agent.dhcp.agent [None req-bdbdb9d2-675b-416d-b040-5fae419a013f - - - - - -] DHCP configuration for ports {'d8da2431-0467-4a0e-9273-94094dff7d10'} is completed#033[00m Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.734 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing inventories for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 06:07:22 localhost dnsmasq[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/addn_hosts - 1 addresses Oct 5 06:07:22 localhost dnsmasq-dhcp[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/host Oct 5 06:07:22 localhost dnsmasq-dhcp[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/opts Oct 5 06:07:22 localhost podman[333482]: 2025-10-05 10:07:22.858479971 +0000 UTC m=+0.056829056 container kill d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.984 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating ProviderTree inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 06:07:22 localhost nova_compute[297021]: 2025-10-05 10:07:22.985 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.027 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing aggregate associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, aggregates: None 
_refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.071 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing trait associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, traits: HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 06:07:23 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:23.089 272040 INFO neutron.agent.dhcp.agent 
[None req-87c24ca6-d083-4b44-9dfd-1e4e0f97cd39 - - - - - -] DHCP configuration for ports {'6a2be869-047c-4812-b533-b05d50c33b8c'} is completed#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.115 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:07:23 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:23.514 2 INFO neutron.agent.securitygroups_rpc [None req-6733de2d-2a6b-463e-8b00-30e6441f698f 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['6e6bf508-1e73-4b5c-995d-22056e152d33']#033[00m Oct 5 06:07:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:07:23 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1691432266' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.560 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.567 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.593 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.618 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.618 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.619 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.620 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 06:07:23 localhost nova_compute[297021]: 2025-10-05 10:07:23.637 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 06:07:23 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:23.850 2 INFO neutron.agent.securitygroups_rpc [None req-3c31fa34-f2a5-4486-ae28-cc78fe0adcf8 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['6e6bf508-1e73-4b5c-995d-22056e152d33']#033[00m Oct 5 06:07:24 localhost nova_compute[297021]: 2025-10-05 10:07:24.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:24 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:24.409 2 INFO neutron.agent.securitygroups_rpc [None req-2a8be6c9-0c7b-4153-962c-037874c56838 ba8f36397fe34869b1ddea72956496e9 e4fec76d88a14080a1ea7ef01fc37834 - - default default] Security group rule updated 
['196a27b9-1ae6-48cd-8927-7a35ed2bb701']#033[00m Oct 5 06:07:24 localhost nova_compute[297021]: 2025-10-05 10:07:24.620 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:24 localhost nova_compute[297021]: 2025-10-05 10:07:24.620 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:07:24 localhost nova_compute[297021]: 2025-10-05 10:07:24.621 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:07:24 localhost nova_compute[297021]: 2025-10-05 10:07:24.841 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:07:24 localhost nova_compute[297021]: 2025-10-05 10:07:24.842 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:07:24 localhost nova_compute[297021]: 2025-10-05 10:07:24.842 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:07:24 localhost nova_compute[297021]: 2025-10-05 
10:07:24.842 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:07:25 localhost nova_compute[297021]: 2025-10-05 10:07:25.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:25 localhost nova_compute[297021]: 2025-10-05 10:07:25.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:07:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:07:25 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:25.613 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:21Z, description=, device_id=e9dc33c9-941d-4c41-be43-045a027d9381, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6a2be869-047c-4812-b533-b05d50c33b8c, ip_allocation=immediate, mac_address=fa:16:3e:3d:27:45, name=tempest-PortsIpV6TestJSON-1075879342, network_id=463c915a-cf01-4e69-98f4-452cdcea4bb5, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['72863814-32f3-4006-a64f-d6dada584ee1'], standard_attr_id=2038, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, 
updated_at=2025-10-05T10:07:23Z on network 463c915a-cf01-4e69-98f4-452cdcea4bb5#033[00m Oct 5 06:07:25 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:25.659 2 INFO neutron.agent.securitygroups_rpc [None req-b43bf185-1535-4013-a453-5ca09b3fe0fa 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['0024fe21-bb10-48ff-858e-6966a60efa16']#033[00m Oct 5 06:07:25 localhost systemd[1]: tmp-crun.JuOcFj.mount: Deactivated successfully. Oct 5 06:07:25 localhost podman[333525]: 2025-10-05 10:07:25.714697737 +0000 UTC m=+0.117129476 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:25 localhost podman[333525]: 2025-10-05 10:07:25.749987743 +0000 UTC m=+0.152419482 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 5 
06:07:25 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:07:25 localhost systemd[1]: tmp-crun.HkqXoW.mount: Deactivated successfully. Oct 5 06:07:25 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:25.809 2 INFO neutron.agent.securitygroups_rpc [None req-460f44d0-d4bd-49be-83e4-664c20ad77c4 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['0024fe21-bb10-48ff-858e-6966a60efa16']#033[00m Oct 5 06:07:25 localhost podman[333526]: 2025-10-05 10:07:25.816682284 +0000 UTC m=+0.216517904 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:25 localhost podman[333526]: 2025-10-05 10:07:25.830697141 +0000 UTC m=+0.230532741 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Oct 5 06:07:25 localhost dnsmasq[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/addn_hosts - 1 addresses Oct 5 06:07:25 localhost dnsmasq-dhcp[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/host Oct 5 06:07:25 localhost podman[333571]: 2025-10-05 10:07:25.833035484 +0000 UTC m=+0.047288111 container kill d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 06:07:25 localhost dnsmasq-dhcp[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/opts Oct 5 06:07:25 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 06:07:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:07:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2478177620' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:07:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:07:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2478177620' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:07:26 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:26.050 272040 INFO neutron.agent.dhcp.agent [None req-19cf671d-dc4e-45ff-83e7-74503eb57ab8 - - - - - -] DHCP configuration for ports {'6a2be869-047c-4812-b533-b05d50c33b8c'} is completed#033[00m Oct 5 06:07:26 localhost nova_compute[297021]: 2025-10-05 10:07:26.123 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, 
"active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:07:26 localhost nova_compute[297021]: 2025-10-05 10:07:26.135 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:07:26 localhost nova_compute[297021]: 2025-10-05 10:07:26.136 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:07:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:27 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:27.320 2 INFO neutron.agent.securitygroups_rpc [None req-4a584ae4-1112-479a-ab85-28ab8ab98f6e 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:27 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:27.841 2 INFO neutron.agent.securitygroups_rpc [None req-c5b4d53b-f4bc-4029-b513-8a976632fbb6 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:27 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:27.940 2 INFO neutron.agent.securitygroups_rpc [None req-2139208c-fc5a-4b10-b0b6-a160c9a6ef08 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group 
member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:28 localhost sshd[333635]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:07:28 localhost systemd[1]: tmp-crun.k3UWqn.mount: Deactivated successfully. Oct 5 06:07:28 localhost dnsmasq[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/addn_hosts - 0 addresses Oct 5 06:07:28 localhost dnsmasq-dhcp[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/host Oct 5 06:07:28 localhost dnsmasq-dhcp[333463]: read /var/lib/neutron/dhcp/463c915a-cf01-4e69-98f4-452cdcea4bb5/opts Oct 5 06:07:28 localhost podman[333663]: 2025-10-05 10:07:28.392834822 +0000 UTC m=+0.072914188 container kill d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:28 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:28.488 2 INFO neutron.agent.securitygroups_rpc [None req-84657f08-3891-427d-bba4-4d37014315fa 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:28 localhost ovn_controller[157794]: 2025-10-05T10:07:28Z|00322|binding|INFO|Releasing lport 2859237a-46a1-456f-99ca-ba12f9f04302 from this chassis (sb_readonly=0) Oct 5 06:07:28 localhost ovn_controller[157794]: 2025-10-05T10:07:28Z|00323|binding|INFO|Setting lport 2859237a-46a1-456f-99ca-ba12f9f04302 down in Southbound Oct 5 06:07:28 localhost kernel: device tap2859237a-46 left promiscuous mode Oct 
5 06:07:28 localhost nova_compute[297021]: 2025-10-05 10:07:28.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:28.601 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-463c915a-cf01-4e69-98f4-452cdcea4bb5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-463c915a-cf01-4e69-98f4-452cdcea4bb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bf835b-d0de-407f-abca-b4ea98d5751f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2859237a-46a1-456f-99ca-ba12f9f04302) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:28.602 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 2859237a-46a1-456f-99ca-ba12f9f04302 in datapath 463c915a-cf01-4e69-98f4-452cdcea4bb5 unbound from our chassis#033[00m Oct 5 06:07:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:28.603 163434 DEBUG 
neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 463c915a-cf01-4e69-98f4-452cdcea4bb5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:28.604 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[e4609eb6-0a29-48d2-9e74-b87c55a57c89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:28 localhost nova_compute[297021]: 2025-10-05 10:07:28.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:07:28 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:07:28 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:07:28 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:07:29 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:29.040 2 INFO neutron.agent.securitygroups_rpc [None req-8bd9ee0f-6143-4175-9c02-390480d4799e 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:29 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:29.335 2 INFO neutron.agent.securitygroups_rpc [None req-476f947b-c764-483a-82cc-ec88b0153aca 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 
06:07:29 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:29.417 272040 INFO neutron.agent.linux.ip_lib [None req-aef12642-f7a5-4e07-8504-b7c947de9083 - - - - - -] Device tap1ddb8b07-4a cannot be used as it has no MAC address#033[00m Oct 5 06:07:29 localhost nova_compute[297021]: 2025-10-05 10:07:29.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:29 localhost kernel: device tap1ddb8b07-4a entered promiscuous mode Oct 5 06:07:29 localhost NetworkManager[5981]: [1759658849.4461] manager: (tap1ddb8b07-4a): new Generic device (/org/freedesktop/NetworkManager/Devices/53) Oct 5 06:07:29 localhost ovn_controller[157794]: 2025-10-05T10:07:29Z|00324|binding|INFO|Claiming lport 1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c for this chassis. Oct 5 06:07:29 localhost nova_compute[297021]: 2025-10-05 10:07:29.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:29 localhost ovn_controller[157794]: 2025-10-05T10:07:29Z|00325|binding|INFO|1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c: Claiming unknown Oct 5 06:07:29 localhost systemd-udevd[333737]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:07:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:29.461 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-f1038e1f-3cc7-4b34-88bb-084cd467b2b8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1038e1f-3cc7-4b34-88bb-084cd467b2b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d90999-1a72-4b58-8675-f4f29a5c5b5d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:29.463 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c in datapath f1038e1f-3cc7-4b34-88bb-084cd467b2b8 bound to our chassis#033[00m Oct 5 06:07:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:29.466 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 85e575f0-0112-47ea-ab1a-760b247711bb IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:07:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:29.466 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1038e1f-3cc7-4b34-88bb-084cd467b2b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:07:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:29.467 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd3fe49-333e-4ec6-9a64-4733e2ccab85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:29 localhost journal[237931]: ethtool ioctl error on tap1ddb8b07-4a: No such device Oct 5 06:07:29 localhost ovn_controller[157794]: 2025-10-05T10:07:29Z|00326|binding|INFO|Setting lport 1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c ovn-installed in OVS Oct 5 06:07:29 localhost ovn_controller[157794]: 2025-10-05T10:07:29Z|00327|binding|INFO|Setting lport 1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c up in Southbound Oct 5 06:07:29 localhost journal[237931]: ethtool ioctl error on tap1ddb8b07-4a: No such device Oct 5 06:07:29 localhost nova_compute[297021]: 2025-10-05 10:07:29.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:29 localhost journal[237931]: ethtool ioctl error on tap1ddb8b07-4a: No such device Oct 5 06:07:29 localhost journal[237931]: ethtool ioctl error on tap1ddb8b07-4a: No such device Oct 5 06:07:29 localhost journal[237931]: ethtool ioctl error on tap1ddb8b07-4a: No such device Oct 5 06:07:29 localhost journal[237931]: ethtool ioctl error on tap1ddb8b07-4a: No such device Oct 5 06:07:29 localhost journal[237931]: ethtool ioctl error on tap1ddb8b07-4a: No such device Oct 5 06:07:29 localhost journal[237931]: ethtool ioctl error on tap1ddb8b07-4a: No such device Oct 5 06:07:29 
localhost nova_compute[297021]: 2025-10-05 10:07:29.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:29 localhost nova_compute[297021]: 2025-10-05 10:07:29.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:30 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:30.085 2 INFO neutron.agent.securitygroups_rpc [None req-ee2aff28-3b14-4064-8176-7cb1e66e643a 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:30 localhost nova_compute[297021]: 2025-10-05 10:07:30.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:30 localhost podman[333813]: 2025-10-05 10:07:30.356796232 +0000 UTC m=+0.119761547 container kill d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:07:30 localhost dnsmasq[333463]: exiting on receipt of SIGTERM Oct 5 06:07:30 localhost systemd[1]: libpod-d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b.scope: Deactivated successfully. 
Oct 5 06:07:30 localhost podman[333836]: Oct 5 06:07:30 localhost podman[333836]: 2025-10-05 10:07:30.411320375 +0000 UTC m=+0.097714344 container create ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true) Oct 5 06:07:30 localhost podman[333850]: 2025-10-05 10:07:30.43831068 +0000 UTC m=+0.064368979 container died d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 06:07:30 localhost systemd[1]: Started libpod-conmon-ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875.scope. 
Oct 5 06:07:30 localhost podman[333836]: 2025-10-05 10:07:30.365223858 +0000 UTC m=+0.051617877 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:30 localhost podman[333850]: 2025-10-05 10:07:30.472844237 +0000 UTC m=+0.098902506 container cleanup d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:07:30 localhost systemd[1]: Started libcrun container. Oct 5 06:07:30 localhost systemd[1]: libpod-conmon-d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b.scope: Deactivated successfully. 
Oct 5 06:07:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/034e66087ff5d4d188edc28fd9c20acbfa58e0684920bf55a0e547d6cc4199e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:30 localhost podman[333836]: 2025-10-05 10:07:30.488248661 +0000 UTC m=+0.174642640 container init ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:30 localhost podman[333836]: 2025-10-05 10:07:30.496575455 +0000 UTC m=+0.182969424 container start ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:07:30 localhost dnsmasq[333881]: started, version 2.85 cachesize 150 Oct 5 06:07:30 localhost dnsmasq[333881]: DNS service limited to local subnets Oct 5 06:07:30 localhost dnsmasq[333881]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:30 localhost dnsmasq[333881]: warning: no upstream servers configured Oct 
5 06:07:30 localhost dnsmasq-dhcp[333881]: DHCP, static leases only on 10.102.0.0, lease time 1d Oct 5 06:07:30 localhost dnsmasq[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/addn_hosts - 0 addresses Oct 5 06:07:30 localhost dnsmasq-dhcp[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/host Oct 5 06:07:30 localhost dnsmasq-dhcp[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/opts Oct 5 06:07:30 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:30.539 272040 INFO neutron.agent.dhcp.agent [None req-d57fb65c-4110-481a-8729-ee304127c6de - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:29Z, description=, device_id=78c061eb-1679-4181-813b-9e3b2ae54fef, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=841f359d-3a10-4292-a907-57d1db3d46c4, ip_allocation=immediate, mac_address=fa:16:3e:1e:2d:0e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:25Z, description=, dns_domain=, id=f1038e1f-3cc7-4b34-88bb-084cd467b2b8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1927329749, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33842, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2070, status=ACTIVE, subnets=['338d1420-9de6-4d80-ab6f-b9c729a9568c'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:07:27Z, vlan_transparent=None, network_id=f1038e1f-3cc7-4b34-88bb-084cd467b2b8, port_security_enabled=False, 
project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2105, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:07:29Z on network f1038e1f-3cc7-4b34-88bb-084cd467b2b8#033[00m Oct 5 06:07:30 localhost podman[333853]: 2025-10-05 10:07:30.572537574 +0000 UTC m=+0.190637769 container remove d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-463c915a-cf01-4e69-98f4-452cdcea4bb5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:07:30 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:30.596 272040 INFO neutron.agent.dhcp.agent [None req-22d69346-e43e-4a28-bfa7-36c8457a7db7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:30 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:30.640 272040 INFO neutron.agent.dhcp.agent [None req-85640668-974e-43c8-975b-a00835090dbe - - - - - -] DHCP configuration for ports {'86322144-df35-4ec3-b6a1-736e4f5a01b5'} is completed#033[00m Oct 5 06:07:30 localhost dnsmasq[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/addn_hosts - 1 addresses Oct 5 06:07:30 localhost dnsmasq-dhcp[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/host Oct 5 06:07:30 localhost dnsmasq-dhcp[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/opts Oct 5 06:07:30 localhost podman[333902]: 2025-10-05 10:07:30.760981643 +0000 UTC m=+0.058352087 container kill 
ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 06:07:30 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:30.784 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:30 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:30.997 272040 INFO neutron.agent.dhcp.agent [None req-fc5d94bd-6f28-4da9-9ad7-1b83193a3d31 - - - - - -] DHCP configuration for ports {'841f359d-3a10-4292-a907-57d1db3d46c4'} is completed#033[00m Oct 5 06:07:31 localhost ovn_controller[157794]: 2025-10-05T10:07:31Z|00328|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:07:31 localhost nova_compute[297021]: 2025-10-05 10:07:31.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:07:31 localhost systemd[1]: tmp-crun.RVsjRx.mount: Deactivated successfully. Oct 5 06:07:31 localhost systemd[1]: var-lib-containers-storage-overlay-01ea767ebb04343e8b8939fbacce56ab391bfc1a06f3e8c40ececec4cc57ba08-merged.mount: Deactivated successfully. Oct 5 06:07:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d11e3af7a3c6877e91ee5128a7fda3a9fe388a5f5e458c9546849746f0bd680b-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:07:31 localhost systemd[1]: run-netns-qdhcp\x2d463c915a\x2dcf01\x2d4e69\x2d98f4\x2d452cdcea4bb5.mount: Deactivated successfully. Oct 5 06:07:31 localhost systemd[1]: tmp-crun.WdUoyX.mount: Deactivated successfully. Oct 5 06:07:31 localhost podman[333924]: 2025-10-05 10:07:31.454923855 +0000 UTC m=+0.112163413 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible) Oct 5 06:07:31 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:31.463 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:29Z, description=, device_id=78c061eb-1679-4181-813b-9e3b2ae54fef, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=841f359d-3a10-4292-a907-57d1db3d46c4, ip_allocation=immediate, mac_address=fa:16:3e:1e:2d:0e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:25Z, description=, dns_domain=, id=f1038e1f-3cc7-4b34-88bb-084cd467b2b8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1927329749, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33842, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2070, status=ACTIVE, subnets=['338d1420-9de6-4d80-ab6f-b9c729a9568c'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:07:27Z, vlan_transparent=None, network_id=f1038e1f-3cc7-4b34-88bb-084cd467b2b8, port_security_enabled=False, project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2105, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:07:29Z on network f1038e1f-3cc7-4b34-88bb-084cd467b2b8#033[00m Oct 5 06:07:31 localhost podman[333924]: 2025-10-05 10:07:31.490756837 +0000 UTC 
m=+0.147996365 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:31 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:07:31 localhost dnsmasq[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/addn_hosts - 1 addresses Oct 5 06:07:31 localhost podman[333958]: 2025-10-05 10:07:31.681431536 +0000 UTC m=+0.058627005 container kill ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:07:31 localhost dnsmasq-dhcp[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/host Oct 5 06:07:31 localhost dnsmasq-dhcp[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/opts Oct 5 06:07:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:07:31 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:07:31 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:31.840 2 INFO neutron.agent.securitygroups_rpc [None req-a750740a-7bbf-4d80-b3fd-3791018d3fe5 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['081cd962-3c9a-4af0-bdfc-f1ce8d3a3fe1']#033[00m Oct 5 06:07:31 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:31.948 272040 INFO neutron.agent.dhcp.agent [None req-854891c3-5af3-4986-850c-6c4b758ac778 - - - - - -] 
DHCP configuration for ports {'841f359d-3a10-4292-a907-57d1db3d46c4'} is completed#033[00m Oct 5 06:07:32 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:07:33 localhost nova_compute[297021]: 2025-10-05 10:07:33.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:34 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:34.601 2 INFO neutron.agent.securitygroups_rpc [None req-7c15e948-0756-4894-9401-16ac3bd80e33 b6eee72daf174482a09538159bfd443d f34fdb6c55c946fcb8470c230a141a31 - - default default] Security group member updated ['99deb70b-a280-4904-b641-029f0268e21a']#033[00m Oct 5 06:07:35 localhost nova_compute[297021]: 2025-10-05 10:07:35.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:35 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:35.501 2 INFO neutron.agent.securitygroups_rpc [None req-8bc4a948-b240-4615-bb13-976b18c48ca9 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['5ec48d99-9389-4c99-9e3e-c175128cae07']#033[00m Oct 5 06:07:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:07:35 localhost podman[333981]: 2025-10-05 10:07:35.678157213 +0000 UTC m=+0.082987199 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:35 localhost podman[333981]: 2025-10-05 10:07:35.723907932 +0000 UTC m=+0.128737918 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller) Oct 5 06:07:35 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:07:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:37 localhost sshd[334008]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:07:38 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:38.181 2 INFO neutron.agent.securitygroups_rpc [None req-84848f06-9748-4b0b-af05-ee277d570d7d b6eee72daf174482a09538159bfd443d f34fdb6c55c946fcb8470c230a141a31 - - default default] Security group member updated ['99deb70b-a280-4904-b641-029f0268e21a']#033[00m Oct 5 06:07:38 localhost nova_compute[297021]: 2025-10-05 10:07:38.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:07:38 localhost systemd[1]: tmp-crun.bSuKuJ.mount: Deactivated successfully. 
Oct 5 06:07:38 localhost podman[334011]: 2025-10-05 10:07:38.682278929 +0000 UTC m=+0.093368868 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 06:07:38 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:38.715 2 INFO neutron.agent.securitygroups_rpc [None 
req-794b5ac2-514f-4cf2-8f6d-60c082ae4422 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72acb41c-3515-430f-8b0d-c6b4b4c48929', '5ec48d99-9389-4c99-9e3e-c175128cae07']#033[00m Oct 5 06:07:38 localhost podman[334011]: 2025-10-05 10:07:38.723911327 +0000 UTC m=+0.135001286 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute) Oct 5 06:07:38 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:07:39 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:39.182 2 INFO neutron.agent.securitygroups_rpc [None req-fbb1a0f1-0d63-49d2-920b-22d6e8214efb cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72acb41c-3515-430f-8b0d-c6b4b4c48929']#033[00m Oct 5 06:07:40 localhost nova_compute[297021]: 2025-10-05 10:07:40.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:41 localhost nova_compute[297021]: 2025-10-05 10:07:41.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:07:41 localhost systemd[1]: tmp-crun.lZjKWR.mount: Deactivated successfully. 
Oct 5 06:07:41 localhost podman[334030]: 2025-10-05 10:07:41.678224825 +0000 UTC m=+0.089062472 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the 
minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 06:07:41 localhost podman[334030]: 2025-10-05 10:07:41.693964098 +0000 UTC m=+0.104801795 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, vendor=Red Hat, Inc.) Oct 5 06:07:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:41 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:07:42 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:42.994 2 INFO neutron.agent.securitygroups_rpc [None req-7ace4b82-0cc4-4084-83e7-ac763f44855c cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['7afb8f56-9b39-4524-946c-98857ae057ed']#033[00m Oct 5 06:07:43 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:43.073 2 INFO neutron.agent.securitygroups_rpc [None req-df770e99-c016-4090-bd54-f5a4e9cc943d 70cea673858c4ca7a047572a65bd009d ca6cedc436004b98b4d6a7b8317517ef - - default default] Security group member updated ['b57cacfc-2c28-480a-b1eb-ffe3c939d72c']#033[00m Oct 5 06:07:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:07:44 localhost systemd[1]: tmp-crun.ARafFT.mount: Deactivated successfully. Oct 5 06:07:44 localhost podman[334051]: 2025-10-05 10:07:44.692594156 +0000 UTC m=+0.097284193 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:07:44 localhost 
podman[334051]: 2025-10-05 10:07:44.72773306 +0000 UTC m=+0.132423057 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:07:44 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 06:07:45 localhost nova_compute[297021]: 2025-10-05 10:07:45.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:45 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:45.435 2 INFO neutron.agent.securitygroups_rpc [None req-1b97a73b-3735-424d-b9eb-71862dbcdb8c cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['8348d8b9-8edc-450f-bfc6-3a50148b401f', '82b4f83b-cc5b-4542-abf5-d19bf0c21453', '7afb8f56-9b39-4524-946c-98857ae057ed']#033[00m Oct 5 06:07:46 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:46.148 2 INFO neutron.agent.securitygroups_rpc [None req-049b9b58-6a79-440b-bc0e-124f0a306f57 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['8348d8b9-8edc-450f-bfc6-3a50148b401f', '82b4f83b-cc5b-4542-abf5-d19bf0c21453']#033[00m Oct 5 06:07:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:46 localhost ovn_controller[157794]: 2025-10-05T10:07:46Z|00329|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:07:46 localhost nova_compute[297021]: 2025-10-05 10:07:46.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:07:46 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4294956886' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:07:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:07:46 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4294956886' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:07:47 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:47.584 2 INFO neutron.agent.securitygroups_rpc [None req-f98984bf-27f2-402b-bf53-d253fe79e1e2 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e149 do_prune osdmap full prune enabled Oct 5 06:07:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e150 e150: 6 total, 6 up, 6 in Oct 5 06:07:48 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e150: 6 total, 6 up, 6 in Oct 5 06:07:48 localhost dnsmasq[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/addn_hosts - 0 addresses Oct 5 06:07:48 localhost dnsmasq-dhcp[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/host Oct 5 06:07:48 localhost dnsmasq-dhcp[333881]: read /var/lib/neutron/dhcp/f1038e1f-3cc7-4b34-88bb-084cd467b2b8/opts Oct 5 06:07:48 localhost podman[334090]: 2025-10-05 10:07:48.299017605 +0000 UTC m=+0.061907964 container kill ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:07:48 localhost systemd[1]: tmp-crun.aki1Zh.mount: Deactivated successfully. Oct 5 06:07:48 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:48.359 2 INFO neutron.agent.securitygroups_rpc [None req-cc4f7d40-abaa-4d93-8603-d8c24366c18f f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:48 localhost ovn_controller[157794]: 2025-10-05T10:07:48Z|00330|binding|INFO|Releasing lport 1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c from this chassis (sb_readonly=0) Oct 5 06:07:48 localhost ovn_controller[157794]: 2025-10-05T10:07:48Z|00331|binding|INFO|Setting lport 1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c down in Southbound Oct 5 06:07:48 localhost kernel: device tap1ddb8b07-4a left promiscuous mode Oct 5 06:07:48 localhost nova_compute[297021]: 2025-10-05 10:07:48.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:48.529 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-f1038e1f-3cc7-4b34-88bb-084cd467b2b8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1038e1f-3cc7-4b34-88bb-084cd467b2b8', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d90999-1a72-4b58-8675-f4f29a5c5b5d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:48.531 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 1ddb8b07-4a4c-4cb9-9b6f-d14856aac41c in datapath f1038e1f-3cc7-4b34-88bb-084cd467b2b8 unbound from our chassis#033[00m Oct 5 06:07:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:48.534 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1038e1f-3cc7-4b34-88bb-084cd467b2b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:07:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:48.535 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4ec64d-1374-426e-8121-bbcdec92f0c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:48 localhost nova_compute[297021]: 2025-10-05 10:07:48.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:48 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:48.984 2 INFO neutron.agent.securitygroups_rpc [None req-d4fd844c-5319-47bb-bde6-6036c467bd6b cb9d54cf786444a6a77a1980f4a1f3ac 
3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:49 localhost sshd[334135]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:07:49 localhost dnsmasq[333881]: exiting on receipt of SIGTERM Oct 5 06:07:49 localhost systemd[1]: libpod-ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875.scope: Deactivated successfully. Oct 5 06:07:49 localhost podman[334129]: 2025-10-05 10:07:49.54734892 +0000 UTC m=+0.049153401 container kill ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:07:49 localhost podman[334145]: 2025-10-05 10:07:49.615773177 +0000 UTC m=+0.059107438 container died ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:07:49 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:49.644 2 INFO neutron.agent.securitygroups_rpc [None req-a1813b30-646a-4cbf-957a-44cbfadeea3e 70cea673858c4ca7a047572a65bd009d ca6cedc436004b98b4d6a7b8317517ef - - default default] Security group member updated ['b57cacfc-2c28-480a-b1eb-ffe3c939d72c']#033[00m Oct 5 06:07:49 localhost podman[334145]: 2025-10-05 10:07:49.652262527 +0000 UTC m=+0.095596728 container cleanup ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:07:49 localhost systemd[1]: libpod-conmon-ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875.scope: Deactivated successfully. 
Oct 5 06:07:49 localhost podman[334152]: 2025-10-05 10:07:49.676701052 +0000 UTC m=+0.104843065 container remove ea1cd9bdf9a164e6338e034d799f458062ccda650a074efad7cdf5451d063875 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f1038e1f-3cc7-4b34-88bb-084cd467b2b8, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:49 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:49.806 2 INFO neutron.agent.securitygroups_rpc [None req-a7fc5318-33ca-4417-b8ae-a9518ec1d261 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e150 do_prune osdmap full prune enabled Oct 5 06:07:50 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:50.019 272040 INFO neutron.agent.dhcp.agent [None req-acce102e-98b6-4f8e-9d35-efb692659660 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:50 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:50.020 272040 INFO neutron.agent.dhcp.agent [None req-acce102e-98b6-4f8e-9d35-efb692659660 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e151 e151: 6 total, 6 up, 6 in Oct 5 06:07:50 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e151: 6 total, 6 up, 6 in Oct 5 06:07:50 localhost nova_compute[297021]: 2025-10-05 10:07:50.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:50 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:50.289 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:07:50 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:50.350 2 INFO neutron.agent.securitygroups_rpc [None req-e3301919-1a61-48e0-baea-7f35439c8826 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:50 localhost podman[334174]: 2025-10-05 10:07:50.424790707 +0000 UTC m=+0.083994036 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': 
{'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:07:50 localhost podman[334174]: 2025-10-05 10:07:50.433130271 +0000 UTC m=+0.092333620 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:07:50 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:07:50 localhost systemd[1]: var-lib-containers-storage-overlay-034e66087ff5d4d188edc28fd9c20acbfa58e0684920bf55a0e547d6cc4199e9-merged.mount: Deactivated successfully. Oct 5 06:07:50 localhost systemd[1]: run-netns-qdhcp\x2df1038e1f\x2d3cc7\x2d4b34\x2d88bb\x2d084cd467b2b8.mount: Deactivated successfully. Oct 5 06:07:50 localhost ovn_controller[157794]: 2025-10-05T10:07:50Z|00332|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:07:50 localhost nova_compute[297021]: 2025-10-05 10:07:50.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e151 do_prune osdmap full prune enabled Oct 5 06:07:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e152 e152: 6 total, 6 up, 6 in Oct 5 06:07:51 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e152: 6 total, 6 up, 6 in Oct 5 06:07:51 localhost podman[248506]: time="2025-10-05T10:07:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:07:51 localhost podman[248506]: @ - - [05/Oct/2025:10:07:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147502 "" "Go-http-client/1.1" Oct 5 06:07:51 localhost podman[248506]: @ - - [05/Oct/2025:10:07:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19853 "" "Go-http-client/1.1" Oct 5 06:07:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:51 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:51.774 2 INFO neutron.agent.securitygroups_rpc [None req-baee9750-4fae-4383-899a-441f63f595a9 f14d23bc33c149adbfd2bfec2aa44b4b 
25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:52 localhost openstack_network_exporter[250601]: ERROR 10:07:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:07:52 localhost openstack_network_exporter[250601]: ERROR 10:07:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:07:52 localhost openstack_network_exporter[250601]: Oct 5 06:07:52 localhost openstack_network_exporter[250601]: ERROR 10:07:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:07:52 localhost openstack_network_exporter[250601]: Oct 5 06:07:52 localhost openstack_network_exporter[250601]: ERROR 10:07:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:07:52 localhost openstack_network_exporter[250601]: ERROR 10:07:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:07:52 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:52.214 2 INFO neutron.agent.securitygroups_rpc [None req-9451ac2e-b360-457e-98f5-d6f0be03282d f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:07:52 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3019752140' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:07:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:07:52 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3019752140' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:07:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e152 do_prune osdmap full prune enabled Oct 5 06:07:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e153 e153: 6 total, 6 up, 6 in Oct 5 06:07:52 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e153: 6 total, 6 up, 6 in Oct 5 06:07:52 localhost ovn_controller[157794]: 2025-10-05T10:07:52Z|00333|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:07:52 localhost nova_compute[297021]: 2025-10-05 10:07:52.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:53 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:53.499 2 INFO neutron.agent.securitygroups_rpc [None req-2fc340b3-3c1a-47ed-be01-f4e92de921f9 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:53 localhost dnsmasq[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/addn_hosts - 0 addresses Oct 5 06:07:53 localhost dnsmasq-dhcp[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/host Oct 5 06:07:53 localhost podman[334217]: 2025-10-05 10:07:53.668896837 +0000 UTC m=+0.060228788 container kill 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:07:53 localhost dnsmasq-dhcp[332907]: read /var/lib/neutron/dhcp/5840a30a-b372-49ca-b438-2e4c61392707/opts Oct 5 06:07:53 localhost ovn_controller[157794]: 2025-10-05T10:07:53Z|00334|binding|INFO|Releasing lport 2ed6a3ea-defe-469a-bf8a-11d74d78d6fd from this chassis (sb_readonly=0) Oct 5 06:07:53 localhost ovn_controller[157794]: 2025-10-05T10:07:53Z|00335|binding|INFO|Setting lport 2ed6a3ea-defe-469a-bf8a-11d74d78d6fd down in Southbound Oct 5 06:07:53 localhost nova_compute[297021]: 2025-10-05 10:07:53.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:53 localhost kernel: device tap2ed6a3ea-de left promiscuous mode Oct 5 06:07:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:53.900 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-5840a30a-b372-49ca-b438-2e4c61392707', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5840a30a-b372-49ca-b438-2e4c61392707', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=434434f2-bb61-4db4-b978-cee9c27d0ab8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2ed6a3ea-defe-469a-bf8a-11d74d78d6fd) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:53.903 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 2ed6a3ea-defe-469a-bf8a-11d74d78d6fd in datapath 5840a30a-b372-49ca-b438-2e4c61392707 unbound from our chassis#033[00m Oct 5 06:07:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:53.905 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5840a30a-b372-49ca-b438-2e4c61392707, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:07:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:07:53.907 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e200eb-f44e-4e75-98b4-916b70ca7632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:53 localhost nova_compute[297021]: 2025-10-05 10:07:53.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:53 localhost nova_compute[297021]: 2025-10-05 10:07:53.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:54 localhost 
neutron_sriov_agent[264984]: 2025-10-05 10:07:54.066 2 INFO neutron.agent.securitygroups_rpc [None req-6a71a440-fbf3-4537-abe8-a7eff2851fe0 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:54 localhost dnsmasq[332907]: exiting on receipt of SIGTERM Oct 5 06:07:54 localhost podman[334256]: 2025-10-05 10:07:54.385465776 +0000 UTC m=+0.064305988 container kill 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:07:54 localhost systemd[1]: libpod-59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2.scope: Deactivated successfully. 
Oct 5 06:07:54 localhost podman[334270]: 2025-10-05 10:07:54.458858636 +0000 UTC m=+0.056066666 container died 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:07:54 localhost podman[334270]: 2025-10-05 10:07:54.494097123 +0000 UTC m=+0.091305123 container cleanup 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:54 localhost systemd[1]: libpod-conmon-59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2.scope: Deactivated successfully. 
Oct 5 06:07:54 localhost podman[334271]: 2025-10-05 10:07:54.531117516 +0000 UTC m=+0.124571365 container remove 59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5840a30a-b372-49ca-b438-2e4c61392707, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:54 localhost systemd[1]: var-lib-containers-storage-overlay-c3af3c34521e597a1ed8aa8902fff3118eaab4f3dfb87ed6b4d9a45e5b128aa2-merged.mount: Deactivated successfully. Oct 5 06:07:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59e1af6bd726c91f0f365ec71fc17213b3dfea55a508a3a1fe7d83f6d98b31a2-userdata-shm.mount: Deactivated successfully. Oct 5 06:07:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:54.737 272040 INFO neutron.agent.dhcp.agent [None req-eb36b1f6-0ade-4704-9245-88490cff1514 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:54 localhost systemd[1]: run-netns-qdhcp\x2d5840a30a\x2db372\x2d49ca\x2db438\x2d2e4c61392707.mount: Deactivated successfully. 
Oct 5 06:07:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:54.845 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e153 do_prune osdmap full prune enabled Oct 5 06:07:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e154 e154: 6 total, 6 up, 6 in Oct 5 06:07:55 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e154: 6 total, 6 up, 6 in Oct 5 06:07:55 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:55.207 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:55 localhost nova_compute[297021]: 2025-10-05 10:07:55.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:55 localhost nova_compute[297021]: 2025-10-05 10:07:55.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:55 localhost ovn_controller[157794]: 2025-10-05T10:07:55Z|00336|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:07:55 localhost nova_compute[297021]: 2025-10-05 10:07:55.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:55 localhost ovn_controller[157794]: 2025-10-05T10:07:55Z|00337|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:07:55 localhost nova_compute[297021]: 2025-10-05 10:07:55.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:55 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:55.919 2 INFO neutron.agent.securitygroups_rpc [None 
req-1d269ddf-7645-4d24-b33d-d166450d0e87 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:07:56 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:56.004 2 INFO neutron.agent.securitygroups_rpc [None req-03f5ff11-97cf-4122-b429-a6f11423915d f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e154 do_prune osdmap full prune enabled Oct 5 06:07:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e155 e155: 6 total, 6 up, 6 in Oct 5 06:07:56 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e155: 6 total, 6 up, 6 in Oct 5 06:07:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:07:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:07:56 localhost systemd[1]: tmp-crun.VnWuBi.mount: Deactivated successfully. 
Oct 5 06:07:56 localhost podman[334301]: 2025-10-05 10:07:56.675114501 +0000 UTC m=+0.080452072 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:07:56 localhost podman[334300]: 2025-10-05 10:07:56.687479032 +0000 UTC m=+0.091614211 container health_status 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:07:56 localhost podman[334300]: 2025-10-05 10:07:56.702748312 +0000 UTC m=+0.106883481 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 06:07:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e155 do_prune osdmap full prune enabled Oct 5 06:07:56 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:07:56 localhost podman[334301]: 2025-10-05 10:07:56.718901056 +0000 UTC m=+0.124238597 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:07:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e156 e156: 6 total, 6 up, 6 in
Oct 5 06:07:56 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e156: 6 total, 6 up, 6 in
Oct 5 06:07:56 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 06:07:57 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:57.056 2 INFO neutron.agent.securitygroups_rpc [None req-b7385107-6494-4a61-8dc0-25a9aaea9b2e f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']
Oct 5 06:07:58 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:58.083 2 INFO neutron.agent.securitygroups_rpc [None req-53177ba6-7660-4de8-8fe1-b74b9593f398 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']
Oct 5 06:07:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 5 06:07:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e157 e157: 6 total, 6 up, 6 in
Oct 5 06:07:58 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e157: 6 total, 6 up, 6 in
Oct 5 06:07:58 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:58.171 2 INFO neutron.agent.securitygroups_rpc [None req-53177ba6-7660-4de8-8fe1-b74b9593f398 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']
Oct 5 06:07:58 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:58.560 2 INFO neutron.agent.securitygroups_rpc [None req-97e921cc-a522-4545-a96d-289c7bae0092 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']
Oct 5 06:07:58 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:58.776 2 INFO neutron.agent.securitygroups_rpc [None req-f66b887a-8f8a-485a-bab5-6f8586c7acf0 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']
Oct 5 06:07:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 5 06:07:59 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e158 e158: 6 total, 6 up, 6 in
Oct 5 06:07:59 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e158: 6 total, 6 up, 6 in
Oct 5 06:07:59 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:59.233 2 INFO neutron.agent.securitygroups_rpc [None req-0763796b-a68e-4290-9057-fc9d8ab18a1b f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']
Oct 5 06:07:59 localhost neutron_sriov_agent[264984]: 2025-10-05 10:07:59.720 2 INFO neutron.agent.securitygroups_rpc [None req-b1991b49-5b7a-43f4-b492-528ba8afa1c6 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']
Oct 5 06:07:59 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:07:59.748 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:08:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 5 06:08:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e159 e159: 6 total, 6 up, 6 in
Oct 5 06:08:00 localhost nova_compute[297021]: 2025-10-05 10:08:00.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:00 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e159: 6 total, 6 up, 6 in
Oct 5 06:08:00 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:00.642 2 INFO neutron.agent.securitygroups_rpc [None req-e18d5d70-b13c-4e2c-a6cd-c80f95b5c7d7 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']
Oct 5 06:08:00 localhost ovn_controller[157794]: 2025-10-05T10:08:00Z|00338|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:08:00 localhost nova_compute[297021]: 2025-10-05 10:08:00.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 5 06:08:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e160 e160: 6 total, 6 up, 6 in
Oct 5 06:08:01 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e160: 6 total, 6 up, 6 in
Oct 5 06:08:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 06:08:01 localhost podman[334338]: 2025-10-05 10:08:01.68655905 +0000 UTC m=+0.083528393 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 5 06:08:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:08:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 5 06:08:01 localhost podman[334338]: 2025-10-05 10:08:01.715878898 +0000 UTC m=+0.112848301 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:08:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e161 e161: 6 total, 6 up, 6 in
Oct 5 06:08:01 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 06:08:01 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e161: 6 total, 6 up, 6 in
Oct 5 06:08:01 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:01.797 2 INFO neutron.agent.securitygroups_rpc [None req-7b95b62b-bd84-486c-bb4f-6a16b0729e5e f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']
Oct 5 06:08:03 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 5 06:08:03 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e162 e162: 6 total, 6 up, 6 in
Oct 5 06:08:03 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e162: 6 total, 6 up, 6 in
Oct 5 06:08:03 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:03.281 272040 INFO neutron.agent.linux.ip_lib [None req-4bc2dbed-da6a-4ad3-8420-7d1cc8483464 - - - - - -] Device tap1c8b77c3-61 cannot be used as it has no MAC address
Oct 5 06:08:03 localhost nova_compute[297021]: 2025-10-05 10:08:03.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:03 localhost kernel: device tap1c8b77c3-61 entered promiscuous mode
Oct 5 06:08:03 localhost NetworkManager[5981]: [1759658883.3243] manager: (tap1c8b77c3-61): new Generic device (/org/freedesktop/NetworkManager/Devices/54)
Oct 5 06:08:03 localhost nova_compute[297021]: 2025-10-05 10:08:03.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:03 localhost ovn_controller[157794]: 2025-10-05T10:08:03Z|00339|binding|INFO|Claiming lport 1c8b77c3-61e2-40e1-8015-ce94272a50e8 for this chassis.
Oct 5 06:08:03 localhost ovn_controller[157794]: 2025-10-05T10:08:03Z|00340|binding|INFO|1c8b77c3-61e2-40e1-8015-ce94272a50e8: Claiming unknown
Oct 5 06:08:03 localhost systemd-udevd[334367]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:08:03 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:03.335 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-7e32d78b-46a6-4f3d-b521-c5ced32d4eae', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e32d78b-46a6-4f3d-b521-c5ced32d4eae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d4b4285-ff80-41d8-8e17-a98712416768, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1c8b77c3-61e2-40e1-8015-ce94272a50e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:08:03 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:03.340 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 1c8b77c3-61e2-40e1-8015-ce94272a50e8 in datapath 7e32d78b-46a6-4f3d-b521-c5ced32d4eae bound to our chassis
Oct 5 06:08:03 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:03.341 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e32d78b-46a6-4f3d-b521-c5ced32d4eae or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 5 06:08:03 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:03.342 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[04c3d77e-d137-4799-b14a-65b5376297dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:08:03 localhost journal[237931]: ethtool ioctl error on tap1c8b77c3-61: No such device
Oct 5 06:08:03 localhost ovn_controller[157794]: 2025-10-05T10:08:03Z|00341|binding|INFO|Setting lport 1c8b77c3-61e2-40e1-8015-ce94272a50e8 ovn-installed in OVS
Oct 5 06:08:03 localhost ovn_controller[157794]: 2025-10-05T10:08:03Z|00342|binding|INFO|Setting lport 1c8b77c3-61e2-40e1-8015-ce94272a50e8 up in Southbound
Oct 5 06:08:03 localhost nova_compute[297021]: 2025-10-05 10:08:03.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:03 localhost journal[237931]: ethtool ioctl error on tap1c8b77c3-61: No such device
Oct 5 06:08:03 localhost journal[237931]: ethtool ioctl error on tap1c8b77c3-61: No such device
Oct 5 06:08:03 localhost journal[237931]: ethtool ioctl error on tap1c8b77c3-61: No such device
Oct 5 06:08:03 localhost journal[237931]: ethtool ioctl error on tap1c8b77c3-61: No such device
Oct 5 06:08:03 localhost journal[237931]: ethtool ioctl error on tap1c8b77c3-61: No such device
Oct 5 06:08:03 localhost journal[237931]: ethtool ioctl error on tap1c8b77c3-61: No such device
Oct 5 06:08:03 localhost journal[237931]: ethtool ioctl error on tap1c8b77c3-61: No such device
Oct 5 06:08:03 localhost nova_compute[297021]: 2025-10-05 10:08:03.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:03 localhost nova_compute[297021]: 2025-10-05 10:08:03.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:04 localhost podman[334437]:
Oct 5 06:08:04 localhost podman[334437]: 2025-10-05 10:08:04.327131945 +0000 UTC m=+0.092148885 container create eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7e32d78b-46a6-4f3d-b521-c5ced32d4eae, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 5 06:08:04 localhost systemd[1]: Started libpod-conmon-eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd.scope.
Oct 5 06:08:04 localhost podman[334437]: 2025-10-05 10:08:04.282935189 +0000 UTC m=+0.047952169 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:08:04 localhost systemd[1]: Started libcrun container.
Oct 5 06:08:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d837bec5ba640031664b6ada5385b7334522706d9c705a885033aaf01e9c3da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:08:04 localhost podman[334437]: 2025-10-05 10:08:04.404722819 +0000 UTC m=+0.169739759 container init eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7e32d78b-46a6-4f3d-b521-c5ced32d4eae, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 5 06:08:04 localhost podman[334437]: 2025-10-05 10:08:04.418071707 +0000 UTC m=+0.183088657 container start eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7e32d78b-46a6-4f3d-b521-c5ced32d4eae, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:08:04 localhost dnsmasq[334456]: started, version 2.85 cachesize 150
Oct 5 06:08:04 localhost dnsmasq[334456]: DNS service limited to local subnets
Oct 5 06:08:04 localhost dnsmasq[334456]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:08:04 localhost dnsmasq[334456]: warning: no upstream servers configured
Oct 5 06:08:04 localhost dnsmasq-dhcp[334456]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:08:04 localhost dnsmasq[334456]: read /var/lib/neutron/dhcp/7e32d78b-46a6-4f3d-b521-c5ced32d4eae/addn_hosts - 0 addresses
Oct 5 06:08:04 localhost dnsmasq-dhcp[334456]: read /var/lib/neutron/dhcp/7e32d78b-46a6-4f3d-b521-c5ced32d4eae/host
Oct 5 06:08:04 localhost dnsmasq-dhcp[334456]: read /var/lib/neutron/dhcp/7e32d78b-46a6-4f3d-b521-c5ced32d4eae/opts
Oct 5 06:08:04 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:04.630 272040 INFO neutron.agent.dhcp.agent [None req-05d3fb96-a325-4313-990d-ded15f0e5c9f - - - - - -] DHCP configuration for ports {'6b55d8b3-5caf-4763-bd05-6971aefcfb71'} is completed
Oct 5 06:08:05 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:05.146 2 INFO neutron.agent.securitygroups_rpc [None req-1aee9f1b-2dcb-448d-9738-dfd913b37f5e 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']
Oct 5 06:08:05 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:05.179 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:04Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a95de611-c849-4033-aef2-fb530f6a74ce, ip_allocation=immediate, mac_address=fa:16:3e:60:e9:41, name=tempest-PortsTestJSON-1824920232, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:08:00Z, description=, dns_domain=, id=7e32d78b-46a6-4f3d-b521-c5ced32d4eae, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1933362961, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=41068, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2261, status=ACTIVE, subnets=['cdee8130-13aa-4d87-9a1d-11a93bcc88fc'], tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:02Z, vlan_transparent=None, network_id=7e32d78b-46a6-4f3d-b521-c5ced32d4eae, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['403ef325-843a-42e9-9412-a4f8fc546f92'], standard_attr_id=2295, status=DOWN, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:04Z on network 7e32d78b-46a6-4f3d-b521-c5ced32d4eae
Oct 5 06:08:05 localhost nova_compute[297021]: 2025-10-05 10:08:05.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:05 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:05.232 2 INFO neutron.agent.securitygroups_rpc [None req-cde486e8-5343-463f-82b5-fd49f79dd99f 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['ce138a34-e48c-4963-b3e5-a739b99229fc']
Oct 5 06:08:05 localhost dnsmasq[334456]: read /var/lib/neutron/dhcp/7e32d78b-46a6-4f3d-b521-c5ced32d4eae/addn_hosts - 1 addresses
Oct 5 06:08:05 localhost podman[334474]: 2025-10-05 10:08:05.428927687 +0000 UTC m=+0.063804584 container kill eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7e32d78b-46a6-4f3d-b521-c5ced32d4eae, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:08:05 localhost dnsmasq-dhcp[334456]: read /var/lib/neutron/dhcp/7e32d78b-46a6-4f3d-b521-c5ced32d4eae/host
Oct 5 06:08:05 localhost dnsmasq-dhcp[334456]: read /var/lib/neutron/dhcp/7e32d78b-46a6-4f3d-b521-c5ced32d4eae/opts
Oct 5 06:08:05 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:05.679 2 INFO neutron.agent.securitygroups_rpc [None req-defe0441-322c-41c6-b4e1-52a9b37ef436 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['ce138a34-e48c-4963-b3e5-a739b99229fc']
Oct 5 06:08:05 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:05.690 272040 INFO neutron.agent.dhcp.agent [None req-98f2fc1d-1637-4d22-8008-1afdfe31f81f - - - - - -] DHCP configuration for ports {'a95de611-c849-4033-aef2-fb530f6a74ce'} is completed
Oct 5 06:08:05 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:05.761 2 INFO neutron.agent.securitygroups_rpc [None req-f76a30a8-838b-4e9c-b656-b89caacfd59e 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']
Oct 5 06:08:06 localhost dnsmasq[334456]: read /var/lib/neutron/dhcp/7e32d78b-46a6-4f3d-b521-c5ced32d4eae/addn_hosts - 0 addresses
Oct 5 06:08:06 localhost dnsmasq-dhcp[334456]: read /var/lib/neutron/dhcp/7e32d78b-46a6-4f3d-b521-c5ced32d4eae/host
Oct 5 06:08:06 localhost podman[334512]: 2025-10-05 10:08:06.003503774 +0000 UTC m=+0.057702590 container kill eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7e32d78b-46a6-4f3d-b521-c5ced32d4eae, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 5 06:08:06 localhost dnsmasq-dhcp[334456]: read /var/lib/neutron/dhcp/7e32d78b-46a6-4f3d-b521-c5ced32d4eae/opts
Oct 5 06:08:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 06:08:06 localhost podman[334525]: 2025-10-05 10:08:06.118512723 +0000 UTC m=+0.083943015 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 06:08:06 localhost podman[334525]: 2025-10-05 10:08:06.16277472 +0000 UTC m=+0.128205022 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 5 06:08:06 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 06:08:06 localhost dnsmasq[334456]: exiting on receipt of SIGTERM
Oct 5 06:08:06 localhost podman[334571]: 2025-10-05 10:08:06.547839679 +0000 UTC m=+0.059076317 container kill eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7e32d78b-46a6-4f3d-b521-c5ced32d4eae, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:08:06 localhost systemd[1]: libpod-eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd.scope: Deactivated successfully.
Oct 5 06:08:06 localhost podman[334585]: 2025-10-05 10:08:06.620020337 +0000 UTC m=+0.053849077 container died eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7e32d78b-46a6-4f3d-b521-c5ced32d4eae, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:08:06 localhost podman[334585]: 2025-10-05 10:08:06.654239716 +0000 UTC m=+0.088068426 container cleanup eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7e32d78b-46a6-4f3d-b521-c5ced32d4eae, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 5 06:08:06 localhost systemd[1]: libpod-conmon-eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd.scope: Deactivated successfully.
Oct 5 06:08:06 localhost podman[334586]: 2025-10-05 10:08:06.697012424 +0000 UTC m=+0.128877351 container remove eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7e32d78b-46a6-4f3d-b521-c5ced32d4eae, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:08:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:08:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 5 06:08:06 localhost ovn_controller[157794]: 2025-10-05T10:08:06Z|00343|binding|INFO|Releasing lport 1c8b77c3-61e2-40e1-8015-ce94272a50e8 from this chassis (sb_readonly=0)
Oct 5 06:08:06 localhost ovn_controller[157794]: 2025-10-05T10:08:06Z|00344|binding|INFO|Setting lport 1c8b77c3-61e2-40e1-8015-ce94272a50e8 down in Southbound
Oct 5 06:08:06 localhost kernel: device tap1c8b77c3-61 left promiscuous mode
Oct 5 06:08:06 localhost nova_compute[297021]: 2025-10-05 10:08:06.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e163 e163: 6 total, 6 up, 6 in
Oct 5 06:08:06 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e163: 6 total, 6 up, 6 in
Oct 5 06:08:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:06.764 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-7e32d78b-46a6-4f3d-b521-c5ced32d4eae', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e32d78b-46a6-4f3d-b521-c5ced32d4eae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d4b4285-ff80-41d8-8e17-a98712416768, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1c8b77c3-61e2-40e1-8015-ce94272a50e8) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:08:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:06.766 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 1c8b77c3-61e2-40e1-8015-ce94272a50e8 in datapath 7e32d78b-46a6-4f3d-b521-c5ced32d4eae unbound from our chassis
Oct 5 06:08:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:06.768 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e32d78b-46a6-4f3d-b521-c5ced32d4eae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 5 06:08:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:06.770 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[016e24cf-41dc-485d-abaa-92c8dd9dc163]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:08:06 localhost nova_compute[297021]: 2025-10-05 10:08:06.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:08:06 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:06.984 272040 INFO neutron.agent.dhcp.agent [None req-9de926b9-30a1-41d6-a24a-8e91735df161 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:08:06 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:06.985 272040 INFO neutron.agent.dhcp.agent [None req-9de926b9-30a1-41d6-a24a-8e91735df161 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:08:07 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:07.069 2 INFO neutron.agent.securitygroups_rpc [None req-9f58c4be-87fa-4490-a743-6e6879b0c41b 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']
Oct 5 06:08:07 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:07.285 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:08:07 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:07.322 2 INFO neutron.agent.securitygroups_rpc [None req-9850b512-ef78-4a81-a9e5-043b1902da93 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']
Oct 5 06:08:07 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:07.462 2 INFO neutron.agent.securitygroups_rpc [None req-528b96f1-eb86-4d53-a25e-09c868ee3534 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']
Oct 5 06:08:07 localhost ovn_controller[157794]: 2025-10-05T10:08:07Z|00345|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:08:07 localhost systemd[1]: var-lib-containers-storage-overlay-9d837bec5ba640031664b6ada5385b7334522706d9c705a885033aaf01e9c3da-merged.mount: Deactivated successfully.
Oct 5 06:08:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb86b4f7bb93e7a1c3755728032e626d665bbadb1dd53658d572c392dcee7bfd-userdata-shm.mount: Deactivated successfully.
Oct 5 06:08:07 localhost systemd[1]: run-netns-qdhcp\x2d7e32d78b\x2d46a6\x2d4f3d\x2db521\x2dc5ced32d4eae.mount: Deactivated successfully.
Oct 5 06:08:07 localhost nova_compute[297021]: 2025-10-05 10:08:07.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:07 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:07.811 2 INFO neutron.agent.securitygroups_rpc [None req-144298d6-1015-4861-bc90-9514ae262cb1 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:08 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:08.116 2 INFO neutron.agent.securitygroups_rpc [None req-2ba3cc32-e760-4e9c-ae49-d16c10977ee3 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:08 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:08.513 2 INFO neutron.agent.securitygroups_rpc [None req-75847f43-d349-4d47-aac8-c32b688fcaaf 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:08 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:08.899 2 INFO neutron.agent.securitygroups_rpc [None req-597813ba-8c7b-483a-9702-9c6315f34eb1 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:09 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:09.190 2 INFO neutron.agent.securitygroups_rpc [None req-f12db9b7-ef7e-487f-94ef-0f8cc2068c7f 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:09 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:09.544 2 INFO neutron.agent.securitygroups_rpc [None 
req-14148e70-70fe-4caa-9845-3c7fbf64477d 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:08:09 localhost podman[334613]: 2025-10-05 10:08:09.676618444 +0000 UTC m=+0.085955559 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, config_id=edpm, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Oct 5 06:08:09 localhost podman[334613]: 2025-10-05 10:08:09.715782885 +0000 UTC m=+0.125120020 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:08:09 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:08:09 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:09.817 2 INFO neutron.agent.securitygroups_rpc [None req-2ac582cd-0625-449c-b0ff-bc862ac98f6d 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:10 localhost nova_compute[297021]: 2025-10-05 10:08:10.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:10 localhost nova_compute[297021]: 2025-10-05 10:08:10.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:10 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:10.949 2 INFO neutron.agent.securitygroups_rpc [None req-e45b1d7d-8924-460e-a1d3-88a484cb60d9 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['537a11fb-699e-467e-a2b8-e26ed1f1f5c6']#033[00m Oct 5 06:08:11 localhost nova_compute[297021]: 2025-10-05 10:08:11.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e163 do_prune osdmap full prune enabled Oct 5 06:08:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e164 e164: 6 total, 6 up, 6 in 
Oct 5 06:08:11 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e164: 6 total, 6 up, 6 in Oct 5 06:08:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:08:12 localhost podman[334632]: 2025-10-05 10:08:12.687778909 +0000 UTC m=+0.090779699 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 06:08:12 localhost podman[334632]: 2025-10-05 10:08:12.701931779 +0000 UTC m=+0.104932579 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm) Oct 5 06:08:12 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:08:12 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:12.778 2 INFO neutron.agent.securitygroups_rpc [None req-6201c560-0d97-401f-bed6-27a3a279e37c 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['6c86b567-a80c-4484-8582-b269952d5c98']#033[00m Oct 5 06:08:13 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:13.091 2 INFO neutron.agent.securitygroups_rpc [None req-9e8e82c4-0a2c-42e9-a046-19758c10d04a 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['6c86b567-a80c-4484-8582-b269952d5c98']#033[00m Oct 5 06:08:13 localhost nova_compute[297021]: 2025-10-05 10:08:13.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:13 localhost nova_compute[297021]: 2025-10-05 10:08:13.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:08:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e164 do_prune osdmap full prune enabled Oct 5 06:08:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e165 e165: 6 total, 6 up, 6 in Oct 5 06:08:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e165: 6 total, 6 up, 6 in Oct 5 06:08:14 localhost nova_compute[297021]: 2025-10-05 10:08:14.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:14 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:14.957 2 INFO neutron.agent.securitygroups_rpc [None req-85222545-f8d6-422b-a67e-44c69923b0cd 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:15 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:15.063 2 INFO neutron.agent.securitygroups_rpc [None req-de1b1e72-8a6c-4b7b-b7af-986bf55b22e2 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['58dad359-5800-4e6b-8895-59e7fd2c651a']#033[00m Oct 5 06:08:15 localhost nova_compute[297021]: 2025-10-05 10:08:15.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:15 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:15.373 2 INFO neutron.agent.securitygroups_rpc [None req-de3a123d-6c60-4f41-ac0d-b90ea96acf15 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:15 localhost neutron_sriov_agent[264984]: 
2025-10-05 10:08:15.379 2 INFO neutron.agent.securitygroups_rpc [None req-7cf0dae4-4090-4a92-956d-ad2afe4a64bb 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['58dad359-5800-4e6b-8895-59e7fd2c651a']#033[00m Oct 5 06:08:15 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:15.403 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:08:15 localhost systemd[1]: tmp-crun.QmwUqF.mount: Deactivated successfully. Oct 5 06:08:15 localhost podman[334652]: 2025-10-05 10:08:15.671748334 +0000 UTC m=+0.078065597 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:08:15 localhost podman[334652]: 2025-10-05 10:08:15.708889751 +0000 UTC m=+0.115207004 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:08:15 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:08:15 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:15.943 2 INFO neutron.agent.securitygroups_rpc [None req-49621add-f870-4536-93e0-3a0d1f472d8b 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:16 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/816015746' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:16 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/816015746' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:16 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:16.087 2 INFO neutron.agent.securitygroups_rpc [None req-9b29eb26-2b22-4613-b3b7-4a016dc2a02c 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:16 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:16.206 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:16 localhost nova_compute[297021]: 2025-10-05 10:08:16.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:16 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:16.208 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:08:16 localhost nova_compute[297021]: 2025-10-05 10:08:16.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:16 localhost 
neutron_sriov_agent[264984]: 2025-10-05 10:08:16.481 2 INFO neutron.agent.securitygroups_rpc [None req-ab5f092d-dcda-4c6f-8450-ab56670675a0 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:16 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:16.507 2 INFO neutron.agent.securitygroups_rpc [None req-cf6909d2-23cc-49f5-96ca-6c9380a21fff 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:16 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:16.528 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:16 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:16.849 2 INFO neutron.agent.securitygroups_rpc [None req-07930e6e-c307-44e9-a139-d00dd44cc84c 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:17 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:17.257 2 INFO neutron.agent.securitygroups_rpc [None req-973a305c-8af9-4ec1-94f7-2edc593eba32 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:17 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:17.419 2 INFO neutron.agent.securitygroups_rpc [None req-6754f8fa-755c-46fd-a6ca-e0f398a31797 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m 
Oct 5 06:08:17 localhost nova_compute[297021]: 2025-10-05 10:08:17.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:17 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:17.661 2 INFO neutron.agent.securitygroups_rpc [None req-04379d75-12fa-4c5c-bf8a-78d937b944a7 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:17 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:17.915 2 INFO neutron.agent.securitygroups_rpc [None req-4dac2c99-1702-43da-bde0-d18a60eeb90c 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:17 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:17.945 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:17 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:17.947 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:17 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:17.950 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:17 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:17.957 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3358a5fd-a2d6-47d5-9a84-163b10a07931]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:18 localhost neutron_sriov_agent[264984]: 2025-10-05 
10:08:18.183 2 INFO neutron.agent.securitygroups_rpc [None req-fa3e679b-0b2a-4fe6-b41b-ad06e38f79b9 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:18 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:18.204 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:18 localhost nova_compute[297021]: 2025-10-05 10:08:18.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:18 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:18.480 2 INFO neutron.agent.securitygroups_rpc [None req-74735a9a-80e5-4814-860d-f9b2a32232ca 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['26da3659-ebd2-47ce-a46d-c888047f4570']#033[00m Oct 5 06:08:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e165 do_prune osdmap full prune enabled Oct 5 06:08:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e166 e166: 6 total, 6 up, 6 in Oct 5 06:08:18 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e166: 6 total, 6 up, 6 in Oct 5 06:08:18 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:18.947 2 INFO neutron.agent.securitygroups_rpc [None req-fdf57d7a-2c8a-4d86-a812-b647fef0ddda f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:19 localhost nova_compute[297021]: 2025-10-05 10:08:19.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:19 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:19.727 2 INFO neutron.agent.securitygroups_rpc [None req-abce5ea6-68db-4abb-9930-124c328ad3d9 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:20 localhost nova_compute[297021]: 2025-10-05 10:08:20.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:20.470 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:08:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:20.471 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:08:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:20.472 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:08:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:08:20 localhost podman[334677]: 2025-10-05 10:08:20.680571624 +0000 UTC m=+0.083228835 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:08:20 localhost podman[334677]: 2025-10-05 10:08:20.687924251 +0000 UTC m=+0.090581482 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:08:20 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:08:20 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:20.903 272040 INFO neutron.agent.linux.ip_lib [None req-e273eaaf-d21d-4240-b220-6ca410f8a049 - - - - - -] Device tapa684d0b8-21 cannot be used as it has no MAC address#033[00m Oct 5 06:08:20 localhost nova_compute[297021]: 2025-10-05 10:08:20.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:20 localhost kernel: device tapa684d0b8-21 entered promiscuous mode Oct 5 06:08:20 localhost NetworkManager[5981]: [1759658900.9378] manager: (tapa684d0b8-21): new Generic device (/org/freedesktop/NetworkManager/Devices/55) Oct 5 06:08:20 localhost nova_compute[297021]: 2025-10-05 10:08:20.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:20 localhost ovn_controller[157794]: 2025-10-05T10:08:20Z|00346|binding|INFO|Claiming lport a684d0b8-219a-4236-868f-b4b671ed8338 for this chassis. Oct 5 06:08:20 localhost ovn_controller[157794]: 2025-10-05T10:08:20Z|00347|binding|INFO|a684d0b8-219a-4236-868f-b4b671ed8338: Claiming unknown Oct 5 06:08:20 localhost systemd-udevd[334710]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:08:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:20.949 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1c05772-11ce-49c5-9c9a-89c03bd89305, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a684d0b8-219a-4236-868f-b4b671ed8338) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:20.951 163434 INFO neutron.agent.ovn.metadata.agent [-] Port a684d0b8-219a-4236-868f-b4b671ed8338 in datapath 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20 bound to our chassis#033[00m Oct 5 06:08:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:20.953 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:08:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:20.953 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[15d34303-31d3-49d9-a11f-082988e04f96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:20 localhost journal[237931]: ethtool ioctl error on tapa684d0b8-21: No such device Oct 5 06:08:20 localhost journal[237931]: ethtool ioctl error on tapa684d0b8-21: No such device Oct 5 06:08:20 localhost ovn_controller[157794]: 2025-10-05T10:08:20Z|00348|binding|INFO|Setting lport a684d0b8-219a-4236-868f-b4b671ed8338 ovn-installed in OVS Oct 5 06:08:20 localhost ovn_controller[157794]: 2025-10-05T10:08:20Z|00349|binding|INFO|Setting lport a684d0b8-219a-4236-868f-b4b671ed8338 up in Southbound Oct 5 06:08:20 localhost nova_compute[297021]: 2025-10-05 10:08:20.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:20 localhost journal[237931]: ethtool ioctl error on tapa684d0b8-21: No such device Oct 5 06:08:20 localhost journal[237931]: ethtool ioctl error on tapa684d0b8-21: No such device Oct 5 06:08:20 localhost journal[237931]: ethtool ioctl error on tapa684d0b8-21: No such device Oct 5 06:08:20 localhost journal[237931]: ethtool ioctl error on tapa684d0b8-21: No such device Oct 5 06:08:20 localhost journal[237931]: ethtool ioctl error on tapa684d0b8-21: No such device Oct 5 06:08:21 localhost journal[237931]: ethtool ioctl error on tapa684d0b8-21: No such device Oct 5 06:08:21 localhost nova_compute[297021]: 2025-10-05 10:08:21.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:21 localhost nova_compute[297021]: 2025-10-05 10:08:21.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:21.410 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:21.412 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:21.414 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:21.416 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[f8346f91-aff7-4db8-bcc0-fafa0a2cac6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:21 localhost podman[248506]: time="2025-10-05T10:08:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:08:21 localhost podman[248506]: @ - - [05/Oct/2025:10:08:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:08:21 localhost podman[248506]: @ - - [05/Oct/2025:10:08:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19380 "" "Go-http-client/1.1" Oct 5 06:08:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:21 localhost podman[334781]: Oct 5 06:08:21 localhost podman[334781]: 2025-10-05 10:08:21.912428437 +0000 UTC m=+0.092892755 container create acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:08:21 localhost systemd[1]: Started libpod-conmon-acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34.scope. Oct 5 06:08:21 localhost podman[334781]: 2025-10-05 10:08:21.865874928 +0000 UTC m=+0.046339276 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:08:21 localhost systemd[1]: Started libcrun container. Oct 5 06:08:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f17c269a3caade3f50a7b534ee2464bd80143a97c24eaf771c8c906559472b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:08:21 localhost podman[334781]: 2025-10-05 10:08:21.991972793 +0000 UTC m=+0.172437101 container init acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:08:22 localhost podman[334781]: 2025-10-05 10:08:22.003095262 +0000 UTC m=+0.183559570 container start acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:08:22 localhost dnsmasq[334800]: started, version 2.85 cachesize 150 Oct 5 06:08:22 localhost dnsmasq[334800]: DNS service limited to local subnets Oct 5 06:08:22 localhost dnsmasq[334800]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:08:22 localhost dnsmasq[334800]: warning: no upstream servers configured Oct 5 06:08:22 localhost dnsmasq-dhcp[334800]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:08:22 localhost dnsmasq[334800]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/addn_hosts - 0 addresses Oct 5 06:08:22 localhost dnsmasq-dhcp[334800]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/host Oct 5 06:08:22 localhost dnsmasq-dhcp[334800]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/opts Oct 5 06:08:22 localhost openstack_network_exporter[250601]: ERROR 10:08:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:08:22 localhost openstack_network_exporter[250601]: ERROR 10:08:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:08:22 localhost openstack_network_exporter[250601]: ERROR 10:08:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:08:22 localhost openstack_network_exporter[250601]: ERROR 10:08:22 
appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:08:22 localhost openstack_network_exporter[250601]: Oct 5 06:08:22 localhost openstack_network_exporter[250601]: ERROR 10:08:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:08:22 localhost openstack_network_exporter[250601]: Oct 5 06:08:22 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:22.185 272040 INFO neutron.agent.dhcp.agent [None req-610d5876-ef36-4250-a239-8c529122349c - - - - - -] DHCP configuration for ports {'da243c3c-62f4-42e5-a938-23cdc589ecbd'} is completed#033[00m Oct 5 06:08:22 localhost nova_compute[297021]: 2025-10-05 10:08:22.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:22 localhost nova_compute[297021]: 2025-10-05 10:08:22.456 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:08:22 localhost nova_compute[297021]: 2025-10-05 10:08:22.457 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:08:22 localhost nova_compute[297021]: 2025-10-05 10:08:22.457 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:08:22 localhost nova_compute[297021]: 2025-10-05 10:08:22.457 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:08:22 localhost nova_compute[297021]: 2025-10-05 10:08:22.458 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:08:22 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:22.724 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:22 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:22.726 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:22 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:22.728 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:22 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:22.729 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[abfdc41e-bea3-4f4a-bb17-cc95871180b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:08:22 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2068844653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:08:22 localhost nova_compute[297021]: 2025-10-05 10:08:22.859 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:08:22 localhost nova_compute[297021]: 2025-10-05 10:08:22.932 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:08:22 localhost nova_compute[297021]: 2025-10-05 10:08:22.933 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:08:23 localhost dnsmasq[334800]: exiting on receipt of SIGTERM Oct 5 06:08:23 localhost podman[334840]: 2025-10-05 10:08:23.112154299 +0000 UTC m=+0.067387750 container kill acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:08:23 localhost systemd[1]: libpod-acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34.scope: Deactivated successfully. 
Oct 5 06:08:23 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:23.168 2 INFO neutron.agent.securitygroups_rpc [None req-d6921d14-97db-4869-b366-00d629b9750a f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.181 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.183 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11198MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", 
"address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.184 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.184 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:08:23 localhost podman[334852]: 2025-10-05 10:08:23.190999516 +0000 UTC m=+0.065413308 container died acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:08:23 localhost podman[334852]: 2025-10-05 10:08:23.221455913 +0000 UTC m=+0.095869665 container cleanup acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3) Oct 5 06:08:23 localhost systemd[1]: libpod-conmon-acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34.scope: Deactivated successfully. Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.259 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.260 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.260 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:08:23 localhost podman[334854]: 2025-10-05 10:08:23.269273417 +0000 UTC m=+0.134548343 container remove acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.323 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:08:23 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:23.328 163434 DEBUG ovsdbapp.backend.ovs_idl.event 
[-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:92:4c 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1c05772-11ce-49c5-9c9a-89c03bd89305, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=da243c3c-62f4-42e5-a938-23cdc589ecbd) old=Port_Binding(mac=['fa:16:3e:ca:92:4c 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:23 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:23.330 163434 INFO 
neutron.agent.ovn.metadata.agent [-] Metadata Port da243c3c-62f4-42e5-a938-23cdc589ecbd in datapath 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20 updated#033[00m Oct 5 06:08:23 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:23.333 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 64a0c403-81c9-477a-9e6e-69005cd7fc2e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:08:23 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:23.333 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:23 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:23.334 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[03f0e01e-984f-41f2-8847-993ca97b5a09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:08:23 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1161321295' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.783 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.791 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.814 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:08:23 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:23.823 2 INFO neutron.agent.securitygroups_rpc [None req-d51aca8d-da7c-4704-906f-6931aa168f0c f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.840 2 DEBUG nova.compute.resource_tracker [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:08:23 localhost nova_compute[297021]: 2025-10-05 10:08:23.840 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:08:23 localhost systemd[1]: var-lib-containers-storage-overlay-a0f17c269a3caade3f50a7b534ee2464bd80143a97c24eaf771c8c906559472b-merged.mount: Deactivated successfully. Oct 5 06:08:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acfdb752f7e1f023c5adea2b18b5986b71a801445ac782b8197bc11d93f9ce34-userdata-shm.mount: Deactivated successfully. Oct 5 06:08:24 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:24.004 2 INFO neutron.agent.securitygroups_rpc [None req-a1ba6929-3c17-4bbf-a2e7-fdef375ea348 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:24 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:24.210 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:08:24 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:24.603 2 INFO neutron.agent.securitygroups_rpc [None req-c63ba15e-4b1c-43e5-b818-071f4bd9ad21 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - 
default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:24 localhost podman[334953]: Oct 5 06:08:24 localhost podman[334953]: 2025-10-05 10:08:24.640212534 +0000 UTC m=+0.089059852 container create a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:08:24 localhost systemd[1]: Started libpod-conmon-a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced.scope. Oct 5 06:08:24 localhost systemd[1]: tmp-crun.X8o40G.mount: Deactivated successfully. Oct 5 06:08:24 localhost podman[334953]: 2025-10-05 10:08:24.59721083 +0000 UTC m=+0.046058178 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:08:24 localhost systemd[1]: Started libcrun container. 
Oct 5 06:08:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b4e58b647134c78aac2b83b3c8a44df763635dcb491e6d46ff7d531a58360c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:08:24 localhost podman[334953]: 2025-10-05 10:08:24.71380495 +0000 UTC m=+0.162652258 container init a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:24 localhost podman[334953]: 2025-10-05 10:08:24.72345167 +0000 UTC m=+0.172298988 container start a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 06:08:24 localhost dnsmasq[334972]: started, version 2.85 cachesize 150 Oct 5 06:08:24 localhost dnsmasq[334972]: DNS service limited to local subnets Oct 5 06:08:24 localhost dnsmasq[334972]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:08:24 localhost dnsmasq[334972]: warning: no upstream servers configured Oct 5 
06:08:24 localhost dnsmasq-dhcp[334972]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:08:24 localhost dnsmasq-dhcp[334972]: DHCP, static leases only on 10.100.0.16, lease time 1d Oct 5 06:08:24 localhost dnsmasq[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/addn_hosts - 0 addresses Oct 5 06:08:24 localhost dnsmasq-dhcp[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/host Oct 5 06:08:24 localhost dnsmasq-dhcp[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/opts Oct 5 06:08:24 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:24.783 272040 INFO neutron.agent.dhcp.agent [None req-b1fd2c12-92b2-4cd1-be32-0e488694cfef - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:23Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=8209911f-b113-4605-9542-6d7b1af1a8dd, ip_allocation=immediate, mac_address=fa:16:3e:b7:18:e8, name=tempest-PortsTestJSON-737503736, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:08:18Z, description=, dns_domain=, id=57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-472409125, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32735, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=2387, status=ACTIVE, subnets=['a00c9aaf-0da1-4ec8-94f5-831ef09736c2', 'bed6c3db-7a52-416e-8c98-286ca53adde4'], tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:22Z, vlan_transparent=None, 
network_id=57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['403ef325-843a-42e9-9412-a4f8fc546f92'], standard_attr_id=2434, status=DOWN, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:23Z on network 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20#033[00m Oct 5 06:08:25 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:25.036 272040 INFO neutron.agent.dhcp.agent [None req-ba45404b-2cd3-451c-910b-294217504f6a - - - - - -] DHCP configuration for ports {'a684d0b8-219a-4236-868f-b4b671ed8338', 'da243c3c-62f4-42e5-a938-23cdc589ecbd'} is completed#033[00m Oct 5 06:08:25 localhost dnsmasq[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/addn_hosts - 2 addresses Oct 5 06:08:25 localhost dnsmasq-dhcp[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/host Oct 5 06:08:25 localhost podman[334990]: 2025-10-05 10:08:25.078214615 +0000 UTC m=+0.063988509 container kill a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:08:25 localhost dnsmasq-dhcp[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/opts Oct 5 06:08:25 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:25.232 272040 INFO neutron.agent.dhcp.agent [None req-f22c23fa-857e-41f2-8c90-eea53e18fef9 - - - - - -] Trigger reload_allocations for port 
admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:23Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8209911f-b113-4605-9542-6d7b1af1a8dd, ip_allocation=immediate, mac_address=fa:16:3e:b7:18:e8, name=tempest-PortsTestJSON-737503736, network_id=57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['403ef325-843a-42e9-9412-a4f8fc546f92'], standard_attr_id=2434, status=DOWN, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:24Z on network 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20#033[00m Oct 5 06:08:25 localhost nova_compute[297021]: 2025-10-05 10:08:25.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:25 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:25.332 272040 INFO neutron.agent.dhcp.agent [None req-5ed5fabc-226b-4dea-9f51-085e445f0cdf - - - - - -] DHCP configuration for ports {'8209911f-b113-4605-9542-6d7b1af1a8dd'} is completed#033[00m Oct 5 06:08:25 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:25.525 2 INFO neutron.agent.securitygroups_rpc [None req-e0f71c4c-b824-4813-9d2e-f9400293976a 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:25 localhost dnsmasq[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/addn_hosts - 1 addresses Oct 5 06:08:25 localhost dnsmasq-dhcp[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/host Oct 5 06:08:25 localhost dnsmasq-dhcp[334972]: read 
/var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/opts Oct 5 06:08:25 localhost podman[335029]: 2025-10-05 10:08:25.537896467 +0000 UTC m=+0.063249840 container kill a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:08:25 localhost systemd[1]: tmp-crun.bC7ImU.mount: Deactivated successfully. Oct 5 06:08:25 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:25.679 272040 INFO neutron.agent.dhcp.agent [None req-7ca291c0-5426-42a1-abe0-49a4d1b6d8f4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:23Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=8209911f-b113-4605-9542-6d7b1af1a8dd, ip_allocation=immediate, mac_address=fa:16:3e:b7:18:e8, name=tempest-PortsTestJSON-737503736, network_id=57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['403ef325-843a-42e9-9412-a4f8fc546f92'], standard_attr_id=2434, status=DOWN, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:25Z on network 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20#033[00m Oct 5 06:08:25 localhost nova_compute[297021]: 2025-10-05 10:08:25.842 2 DEBUG 
oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:25 localhost nova_compute[297021]: 2025-10-05 10:08:25.843 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:08:25 localhost nova_compute[297021]: 2025-10-05 10:08:25.843 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:08:25 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:25.972 272040 INFO neutron.agent.dhcp.agent [None req-223369f5-7f60-482f-9d96-70f428ebc4b6 - - - - - -] DHCP configuration for ports {'8209911f-b113-4605-9542-6d7b1af1a8dd'} is completed#033[00m Oct 5 06:08:26 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:26.049 2 INFO neutron.agent.securitygroups_rpc [None req-27332e57-044b-4d32-bccf-cfbbc12d6206 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:26 localhost dnsmasq[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/addn_hosts - 2 addresses Oct 5 06:08:26 localhost dnsmasq-dhcp[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/host Oct 5 06:08:26 localhost dnsmasq-dhcp[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/opts Oct 5 06:08:26 localhost systemd[1]: tmp-crun.DctHPb.mount: Deactivated successfully. 
Oct 5 06:08:26 localhost podman[335067]: 2025-10-05 10:08:26.168369244 +0000 UTC m=+0.067827283 container kill a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:08:26 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:26.387 272040 INFO neutron.agent.dhcp.agent [None req-81f2d4a9-5654-4b81-b9f2-a286f530a066 - - - - - -] DHCP configuration for ports {'8209911f-b113-4605-9542-6d7b1af1a8dd'} is completed#033[00m Oct 5 06:08:26 localhost dnsmasq[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/addn_hosts - 0 addresses Oct 5 06:08:26 localhost dnsmasq-dhcp[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/host Oct 5 06:08:26 localhost dnsmasq-dhcp[334972]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/opts Oct 5 06:08:26 localhost podman[335101]: 2025-10-05 10:08:26.594219597 +0000 UTC m=+0.067322618 container kill a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:26 localhost ceph-mon[308154]: 
mon.np0005471150@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:26 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:26.844 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:26 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:26.847 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:26 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:26.849 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:26 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:26.850 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[389e0ce7-2f42-43a5-ad6a-51754e78bd3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:27 localhost dnsmasq[334972]: exiting on receipt of SIGTERM Oct 5 06:08:27 localhost podman[335140]: 2025-10-05 10:08:27.005039246 +0000 UTC m=+0.063685840 container kill a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:27 localhost nova_compute[297021]: 2025-10-05 10:08:27.012 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock 
"refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:08:27 localhost nova_compute[297021]: 2025-10-05 10:08:27.012 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:08:27 localhost nova_compute[297021]: 2025-10-05 10:08:27.013 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:08:27 localhost nova_compute[297021]: 2025-10-05 10:08:27.014 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:08:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:08:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:08:27 localhost systemd[1]: libpod-a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced.scope: Deactivated successfully. 
Oct 5 06:08:27 localhost podman[335154]: 2025-10-05 10:08:27.089086213 +0000 UTC m=+0.067463622 container died a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:08:27 localhost podman[335154]: 2025-10-05 10:08:27.117447784 +0000 UTC m=+0.095825153 container cleanup a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:08:27 localhost systemd[1]: libpod-conmon-a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced.scope: Deactivated successfully. Oct 5 06:08:27 localhost systemd[1]: var-lib-containers-storage-overlay-f9b4e58b647134c78aac2b83b3c8a44df763635dcb491e6d46ff7d531a58360c-merged.mount: Deactivated successfully. Oct 5 06:08:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:08:27 localhost podman[335157]: 2025-10-05 10:08:27.180855187 +0000 UTC m=+0.146116504 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.build-date=20251001) Oct 5 06:08:27 localhost podman[335157]: 2025-10-05 10:08:27.220989534 +0000 UTC m=+0.186250851 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:27 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:08:27 localhost podman[335156]: 2025-10-05 10:08:27.273930236 +0000 UTC m=+0.246931481 container remove a660a8ec419d9ec1b8ea6131ba49786637d483f4a73d81a1a4047872967b7ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:08:27 localhost podman[335163]: 2025-10-05 10:08:27.283721419 +0000 UTC m=+0.244732142 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001) Oct 5 06:08:27 localhost podman[335163]: 2025-10-05 10:08:27.323959839 +0000 UTC m=+0.284970602 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0) Oct 5 06:08:27 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 06:08:27 localhost ovn_controller[157794]: 2025-10-05T10:08:27Z|00350|binding|INFO|Removing iface tapa684d0b8-21 ovn-installed in OVS Oct 5 06:08:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:27.495 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 64a0c403-81c9-477a-9e6e-69005cd7fc2e with type ""#033[00m Oct 5 06:08:27 localhost ovn_controller[157794]: 2025-10-05T10:08:27Z|00351|binding|INFO|Removing lport a684d0b8-219a-4236-868f-b4b671ed8338 ovn-installed in OVS Oct 5 06:08:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:27.497 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1c05772-11ce-49c5-9c9a-89c03bd89305, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a684d0b8-219a-4236-868f-b4b671ed8338) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:27.498 163434 INFO neutron.agent.ovn.metadata.agent [-] Port a684d0b8-219a-4236-868f-b4b671ed8338 in datapath 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20 unbound from our chassis#033[00m Oct 5 06:08:27 localhost nova_compute[297021]: 2025-10-05 10:08:27.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:27.500 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:27 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:27.500 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[5cca940a-f964-43e9-8dc7-56de0371cbf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:27 localhost nova_compute[297021]: 2025-10-05 10:08:27.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:27 localhost ovn_controller[157794]: 2025-10-05T10:08:27Z|00352|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:08:27 localhost nova_compute[297021]: 2025-10-05 
10:08:27.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:28 localhost nova_compute[297021]: 2025-10-05 10:08:28.034 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:08:28 localhost nova_compute[297021]: 2025-10-05 10:08:28.049 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:08:28 localhost nova_compute[297021]: 2025-10-05 10:08:28.049 2 DEBUG 
nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:08:28 localhost podman[335263]: Oct 5 06:08:28 localhost podman[335263]: 2025-10-05 10:08:28.217331695 +0000 UTC m=+0.094026835 container create beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:08:28 localhost systemd[1]: Started libpod-conmon-beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd.scope. Oct 5 06:08:28 localhost podman[335263]: 2025-10-05 10:08:28.171750431 +0000 UTC m=+0.048445601 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:08:28 localhost systemd[1]: Started libcrun container. 
Oct 5 06:08:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c081c087ad5e4a27cda1a32c2d21334370ea0f26ba14bed3f59677e69f3ed69d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:08:28 localhost podman[335263]: 2025-10-05 10:08:28.293627603 +0000 UTC m=+0.170322743 container init beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:08:28 localhost podman[335263]: 2025-10-05 10:08:28.303451657 +0000 UTC m=+0.180146797 container start beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:08:28 localhost dnsmasq[335282]: started, version 2.85 cachesize 150 Oct 5 06:08:28 localhost dnsmasq[335282]: DNS service limited to local subnets Oct 5 06:08:28 localhost dnsmasq[335282]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:08:28 localhost dnsmasq[335282]: warning: no upstream servers configured Oct 
5 06:08:28 localhost dnsmasq-dhcp[335282]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:08:28 localhost dnsmasq[335282]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/addn_hosts - 0 addresses Oct 5 06:08:28 localhost dnsmasq-dhcp[335282]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/host Oct 5 06:08:28 localhost dnsmasq-dhcp[335282]: read /var/lib/neutron/dhcp/57b1a27a-3bbd-4de3-add9-1f79ae5d5e20/opts Oct 5 06:08:28 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:28.430 272040 INFO neutron.agent.dhcp.agent [None req-26bc62d6-665b-4885-81ef-466074697e12 - - - - - -] DHCP configuration for ports {'a684d0b8-219a-4236-868f-b4b671ed8338', 'da243c3c-62f4-42e5-a938-23cdc589ecbd'} is completed#033[00m Oct 5 06:08:28 localhost podman[335300]: 2025-10-05 10:08:28.545892106 +0000 UTC m=+0.050762174 container kill beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:08:28 localhost dnsmasq[335282]: exiting on receipt of SIGTERM Oct 5 06:08:28 localhost systemd[1]: libpod-beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd.scope: Deactivated successfully. 
Oct 5 06:08:28 localhost podman[335315]: 2025-10-05 10:08:28.621478815 +0000 UTC m=+0.051771050 container died beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:08:28 localhost podman[335315]: 2025-10-05 10:08:28.66409931 +0000 UTC m=+0.094391495 container remove beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:08:28 localhost nova_compute[297021]: 2025-10-05 10:08:28.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:28 localhost kernel: device tapa684d0b8-21 left promiscuous mode Oct 5 06:08:28 localhost nova_compute[297021]: 2025-10-05 10:08:28.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:28 localhost systemd[1]: libpod-conmon-beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd.scope: Deactivated successfully. 
Oct 5 06:08:28 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:28.755 272040 INFO neutron.agent.dhcp.agent [None req-542423cb-0ddf-4ec1-b8f4-67ed13014d2e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:28 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:28.756 272040 INFO neutron.agent.dhcp.agent [None req-542423cb-0ddf-4ec1-b8f4-67ed13014d2e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:28.836 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.3 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 
'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:28.838 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:28.840 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:28 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:28.841 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[3d399e97-cfaa-4718-af0c-aa00e76d7016]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:29 localhost systemd[1]: var-lib-containers-storage-overlay-c081c087ad5e4a27cda1a32c2d21334370ea0f26ba14bed3f59677e69f3ed69d-merged.mount: Deactivated successfully. Oct 5 06:08:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-beca14a3d58cc01c9085c9ddec8e8e01203d285cbbc0b69dd8c01f6127ce1efd-userdata-shm.mount: Deactivated successfully. Oct 5 06:08:29 localhost systemd[1]: run-netns-qdhcp\x2d57b1a27a\x2d3bbd\x2d4de3\x2dadd9\x2d1f79ae5d5e20.mount: Deactivated successfully. 
Oct 5 06:08:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:08:30 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:08:30 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:30.301 2 INFO neutron.agent.securitygroups_rpc [None req-2eb29681-e6ef-46ba-8164-f72576369c13 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:30 localhost nova_compute[297021]: 2025-10-05 10:08:30.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:30 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:08:30 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:08:30 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e166 do_prune osdmap full prune enabled Oct 5 06:08:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e167 e167: 6 total, 6 up, 6 in Oct 5 06:08:31 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e167: 6 total, 6 up, 6 in Oct 5 06:08:31 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:31.058 2 INFO neutron.agent.securitygroups_rpc [None req-be59150d-24ad-45ae-bfd9-19a1a83e055f f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:31 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:31.161 2 INFO neutron.agent.securitygroups_rpc [None req-7d247693-29e2-4c27-86b9-d6a3c432c80d 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - 
default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:31 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:31.644 2 INFO neutron.agent.securitygroups_rpc [None req-39bd4488-6c14-403b-b8d2-d9ce2663591d 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:31 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3139120677' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:31 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3139120677' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:08:31 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:08:32 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:08:32 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:32.329 2 INFO neutron.agent.securitygroups_rpc [None req-5ea7cfba-1518-4eca-931c-3192665c5231 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:08:32 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:32.686 2 INFO neutron.agent.securitygroups_rpc [None req-bf81a202-147c-434d-a64b-f623ced280eb 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:32 localhost podman[335427]: 2025-10-05 10:08:32.68701833 +0000 UTC m=+0.094076537 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true) Oct 5 06:08:32 localhost podman[335427]: 2025-10-05 10:08:32.717171379 +0000 UTC m=+0.124229586 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 06:08:32 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:08:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:32.956 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.3 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:32.957 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:32.959 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:32.960 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[09694a0c-be8d-4cfd-ac43-1defcc748cc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e167 do_prune osdmap full prune enabled Oct 5 06:08:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e168 e168: 6 total, 6 up, 6 in Oct 5 06:08:33 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e168: 6 total, 6 up, 6 in Oct 5 06:08:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e168 do_prune osdmap full prune enabled Oct 5 06:08:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e169 e169: 6 total, 6 up, 6 in Oct 5 06:08:35 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e169: 6 total, 6 up, 6 in Oct 5 06:08:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:35.157 163434 DEBUG 
ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:35 localhost 
ovn_metadata_agent[163429]: 2025-10-05 10:08:35.159 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:35.161 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:35.162 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e19bdf-fc42-46c1-abe7-393b8bb14215]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:35 localhost nova_compute[297021]: 2025-10-05 10:08:35.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:08:35 localhost nova_compute[297021]: 2025-10-05 10:08:35.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:08:35 localhost nova_compute[297021]: 2025-10-05 10:08:35.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:08:35 localhost nova_compute[297021]: 2025-10-05 10:08:35.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:08:35 localhost nova_compute[297021]: 2025-10-05 10:08:35.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:35 localhost nova_compute[297021]: 2025-10-05 10:08:35.333 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:08:35 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:35.820 2 INFO neutron.agent.securitygroups_rpc [None req-4fe56172-abeb-4404-b26f-e7876d89e5d1 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:36 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3240874194' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:36 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3240874194' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e169 do_prune osdmap full prune enabled Oct 5 06:08:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e170 e170: 6 total, 6 up, 6 in Oct 5 06:08:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e170: 6 total, 6 up, 6 in Oct 5 06:08:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:08:36 localhost podman[335445]: 2025-10-05 10:08:36.678568356 +0000 UTC m=+0.085748362 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:08:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:36 localhost podman[335445]: 2025-10-05 10:08:36.748893565 +0000 UTC m=+0.156073591 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:08:36 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:08:36 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:36.863 2 INFO neutron.agent.securitygroups_rpc [None req-ee8e0800-10a4-4955-b11c-b453e546bec5 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:38 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:38.234 272040 INFO neutron.agent.linux.ip_lib [None req-f6ba9667-2f67-4649-be66-261f5fd23790 - - - - - -] Device tapf6666f61-0e cannot be used as it has no MAC address#033[00m Oct 5 06:08:38 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:38.236 2 INFO neutron.agent.securitygroups_rpc [None req-ea92753e-0357-45dc-8c63-b37c505d2763 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:38 localhost nova_compute[297021]: 2025-10-05 10:08:38.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:38 localhost kernel: device tapf6666f61-0e entered promiscuous mode Oct 5 06:08:38 localhost NetworkManager[5981]: [1759658918.2695] manager: (tapf6666f61-0e): new Generic device (/org/freedesktop/NetworkManager/Devices/56) Oct 5 06:08:38 localhost ovn_controller[157794]: 2025-10-05T10:08:38Z|00353|binding|INFO|Claiming lport f6666f61-0e91-4e1f-9f78-41f7259e7822 for this chassis. Oct 5 06:08:38 localhost ovn_controller[157794]: 2025-10-05T10:08:38Z|00354|binding|INFO|f6666f61-0e91-4e1f-9f78-41f7259e7822: Claiming unknown Oct 5 06:08:38 localhost nova_compute[297021]: 2025-10-05 10:08:38.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:38 localhost systemd-udevd[335481]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:08:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:38.285 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-3f800b06-45d0-44a2-94b5-eaa2adcba16d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f800b06-45d0-44a2-94b5-eaa2adcba16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9ecbbf-51f6-4c1a-8657-b8df63f7737f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f6666f61-0e91-4e1f-9f78-41f7259e7822) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:38.290 163434 INFO neutron.agent.ovn.metadata.agent [-] Port f6666f61-0e91-4e1f-9f78-41f7259e7822 in datapath 3f800b06-45d0-44a2-94b5-eaa2adcba16d bound to our chassis#033[00m Oct 5 06:08:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:38.292 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9dd054c9-941a-417c-a1c4-b75de108b958 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:08:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:38.292 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f800b06-45d0-44a2-94b5-eaa2adcba16d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:38.294 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[c43151dd-98dc-4bb9-834b-eb84e7ee510a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:38 localhost journal[237931]: ethtool ioctl error on tapf6666f61-0e: No such device Oct 5 06:08:38 localhost journal[237931]: ethtool ioctl error on tapf6666f61-0e: No such device Oct 5 06:08:38 localhost journal[237931]: ethtool ioctl error on tapf6666f61-0e: No such device Oct 5 06:08:38 localhost ovn_controller[157794]: 2025-10-05T10:08:38Z|00355|binding|INFO|Setting lport f6666f61-0e91-4e1f-9f78-41f7259e7822 ovn-installed in OVS Oct 5 06:08:38 localhost ovn_controller[157794]: 2025-10-05T10:08:38Z|00356|binding|INFO|Setting lport f6666f61-0e91-4e1f-9f78-41f7259e7822 up in Southbound Oct 5 06:08:38 localhost nova_compute[297021]: 2025-10-05 10:08:38.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:38 localhost journal[237931]: ethtool ioctl error on tapf6666f61-0e: No such device Oct 5 06:08:38 localhost journal[237931]: ethtool ioctl error on tapf6666f61-0e: No such device Oct 5 06:08:38 localhost journal[237931]: ethtool ioctl error on tapf6666f61-0e: No such device Oct 5 06:08:38 localhost journal[237931]: ethtool ioctl error on tapf6666f61-0e: No such device Oct 5 06:08:38 localhost journal[237931]: ethtool ioctl error on tapf6666f61-0e: No such device Oct 5 06:08:38 
localhost nova_compute[297021]: 2025-10-05 10:08:38.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:38 localhost nova_compute[297021]: 2025-10-05 10:08:38.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.839 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.840 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.844 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f447df9c-f69e-4395-8479-0049c2824ae0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.841298', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4357641c-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': 'd90ad6d1abacab20b2f98e2087e41dc6f4e0e11e2d0cdcda0afdb8f71f44b615'}]}, 'timestamp': '2025-10-05 10:08:38.846054', '_unique_id': '142ce638ccca46e282dad7793de093ff'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.847 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.848 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.851 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7cd25ea4-0ba4-4c17-910f-df2b2d11a344', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.851544', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '435852dc-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': '4d5e21b2a030e3827d3f106c0934c8597a6b47315659e1fd25e249fe79c22e13'}]}, 'timestamp': '2025-10-05 10:08:38.852112', '_unique_id': '51862a6e7ac643c48d368f2d619fbc19'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.853 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.854 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.875 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.876 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd116636a-c698-4e18-b526-dd5921bd770a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:08:38.854540', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '435bf838-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': 'c212797a0903f61d6a8fba3cbc6ad7f3bb697c934f3ad6c1837ab7832228b4ab'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:08:38.854540', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '435c0ac6-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': '9282ed7fc697a3e8eefc9158bcf92a574f5999405c12618f641f86c49c31f3ba'}]}, 'timestamp': '2025-10-05 10:08:38.876497', '_unique_id': 'cabd61705bdd495594946b7fbce23596'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.877 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.878 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.889 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.890 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': '416a1e5d-2621-4f43-b20e-4fda9d6ff574', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:08:38.879115', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '435e287e-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.103047271, 'message_signature': 'c364aa3e680833dce85af615b41b1ff92b0ead58250ddaca045acfe26fb608e6'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:08:38.879115', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '435e3cd8-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.103047271, 'message_signature': '64cc84ba700ca1dc6190c1c35fdc70d99ba786bf085ed5c12140b902d0d3508f'}]}, 'timestamp': '2025-10-05 10:08:38.890827', '_unique_id': 'd5047d4389a5441a8ea0793d3676bd38'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.891 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.893 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.893 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.893 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.893 12 DEBUG ceilometer.compute.pollsters [-] 
2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '06f410e4-365d-415a-b5a1-d015ad29532e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:08:38.893278', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '435eaee8-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': '3aa97b248f877a5c169152cc748cc1652c982c648ccfbcc7ab86ede6b2f60cde'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 
'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:08:38.893278', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '435ec0d6-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': 'b086df26215360c5673ab462ee1d38a0f756570cc34a6919c0d2146b427420a4'}]}, 'timestamp': '2025-10-05 10:08:38.894252', '_unique_id': '401bf1e0bf0d4c52bbe5af4413047207'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.895 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.896 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.913 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 17510000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 
ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '86d84d1b-9aff-4abb-845f-e2ed2fff5fb8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 17510000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:08:38.896656', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '4361cea2-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.137548737, 'message_signature': '835ff0c23a5aa7a482f98ee9a6715171377b8d94581aee7aef2ce12cf5864a38'}]}, 'timestamp': '2025-10-05 10:08:38.914247', '_unique_id': 'a08f9c6905ab4731900c182eca1a9919'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.916 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.916 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.916 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.917 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '92c971d7-a188-40a7-bf73-204c97d6fda3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:08:38.916706', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43624120-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': 'd3f57a1b55f2ea711d5c68974bf72380b9f2b6b017218bd851ae4944caf5f0d5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:08:38.916706', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4362516a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': '4e45b031edadd7f9b8c88a4cb343d967c56541db6ca939cf87442a6092e61ae4'}]}, 'timestamp': '2025-10-05 10:08:38.917596', '_unique_id': '167a732def1443bcbb5e7b14fa98609f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.920 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.920 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.920 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'a412ef8d-8ead-4f69-b236-da30a42de0f9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.920826', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4362e74c-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': '6785fb9bed6fa6817ab7d15dfc61e5f10da9a5fc4c87851bb100520d2ad5e7f5'}]}, 'timestamp': '2025-10-05 10:08:38.921599', '_unique_id': 'db879017e0cf4e269b0cafe2d1cdea6c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 
06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.923 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.924 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.924 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fa252e3e-36a8-4245-9a3e-157908c6ebb5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:08:38.923937', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43636690-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': '59c5fb043472741024cc34371bc072c49cd4df42c2041babce02bbf61673180f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:08:38.923937', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43637734-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': '6d36d5a70e92bbb0e2f685e1ba67579ea8c905d32bc0c501bc03d68d94850b73'}]}, 'timestamp': '2025-10-05 10:08:38.925085', '_unique_id': 'd5302c05950542fb99d66cf21fd08d32'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.926 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.927 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.927 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.928 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cef73c7e-214b-4a8a-b8e2-dfc9c7e225ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:08:38.927806', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4363f434-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': '4f8e7e812788d40e6afa898cf0d338fd15d4b45f8961fa56e27256f747353b7c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:08:38.927806', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43640b0e-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': '64268919b8fcab709cb9bdabe263afa9e603fb8fc36fcd68089a35570237740a'}]}, 'timestamp': '2025-10-05 10:08:38.928899', '_unique_id': 'ef4b6839158f41f29e9e5b2f2662fa4b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.929 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.930 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '70a921bf-ad1f-4510-a1e3-1bb5094a74df', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:08:38.931081', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '436471fc-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.103047271, 'message_signature': '1ac132d37d4197dfdfee7fbae4ae1e3402e687823a4edff0cb24f7814aa24b33'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:08:38.931081', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4364837c-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.103047271, 'message_signature': '1b4435aa1a9d82fdaa469c139508268d3296bdc1a9d4f4d6ef7cf6758068160c'}]}, 'timestamp': '2025-10-05 10:08:38.931951', '_unique_id': '9c2ab2423fed4875947c6b9ec60711eb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent 
call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.932 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.934 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.934 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.934 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f5acf731-1c17-4792-9b87-2b8ca2aba5ec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:08:38.934278', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4364f5aa-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': '5a590cd4b52493c0bd0b157bf5ea14cb1089a0e63e1be474b20fe3f8d7dfe4a9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:08:38.934278', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4365070c-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.07845318, 'message_signature': '74bc0b2074f368984db5b36e28f94c0f75e09449c55eadc3923f1f69af0fb6b4'}]}, 'timestamp': '2025-10-05 10:08:38.935334', '_unique_id': '77bec5b9c7b849eaa93255dd38d1b4ac'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.936 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.937 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.937 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3e45504f-b089-42fd-baf1-7a12227b3cb4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.937660', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '436573b8-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': 'a57fc90f6fb5599368677de69ca980c6924a97c96ca3ebec874df63216217f1b'}]}, 'timestamp': '2025-10-05 10:08:38.938148', '_unique_id': '5c9c43c67e4c4fa8a7b4efca58224b25'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.939 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.940 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.940 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2d7d79a8-1ad6-4512-8e5e-06139b8ffe37', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:08:38.940350', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '4365dec0-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.137548737, 'message_signature': '69388983aba7babf82f7826e6633a6641adbc4f0afd52e80d11d7a5ec753197e'}]}, 'timestamp': '2025-10-05 10:08:38.940870', '_unique_id': 'cca967f135fd49cbba49ea9d2cb0758a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR 
oslo_messaging.notify.messaging conn.connect()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.941 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.942 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05
10:08:38.943 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a51433c9-9c03-432d-80e8-484fde3676ab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.943065', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4366468a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': 'cc9962f2ee84a6ac8ef4b5ddecebcf888e4a2c4c284348271e657c08db56be7d'}]}, 'timestamp': '2025-10-05 10:08:38.943577', '_unique_id': '48ea31cfffb9497894b2178e981e2e9e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.944 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.945 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.946 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'ca7d88d7-52c6-496b-8731-3009a0495cae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:08:38.945785', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4366b08e-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.103047271, 'message_signature': '46ccc104acfd7bbf22dfc9feaabe58aa179c0a41f3ae38cba3846a616e4446b7'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:08:38.945785', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4366c2c2-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.103047271, 'message_signature': '5617bae48c1dbe24271962993e6e33fe81da8f5f25c6c4583d4828886be55e4f'}]}, 'timestamp': '2025-10-05 10:08:38.946694', '_unique_id': '3a5a4204d7134a10815759205d55fe5a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.947 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.948 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'd304381b-08c0-4280-8e76-9c6b4a681a59', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.948945', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '43672c44-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': 'a939273560e77cd7db02b973355f9cce3c096b98a9b74d7d0c12ebef14421a52'}]}, 'timestamp': '2025-10-05 10:08:38.949458', '_unique_id': 'fec6fd1bc13c470fafc1fe106e0160e5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:08:38 localhost
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.951 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.951 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7e876468-6427-41b8-9075-71c15b2aaa36', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.951636', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4367954e-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': '0e1f89dfeccd6a3d2d7cdfcedbd802e855901906c85f5bf513ea7f47a64f839d'}]}, 'timestamp': '2025-10-05 10:08:38.952113', '_unique_id': 'ee085df21e1d4d30ba42b4896f885746'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.952 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.953 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.953 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4de3e77d-c07a-49b7-a1c4-7d369292876d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.953872', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4367e90e-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': '7dfea8ea43b5b092177c893dcacb3a79f6cf78c14a35173245e347a885155d71'}]}, 'timestamp': '2025-10-05 10:08:38.954155', '_unique_id': 'da31cead804a4eef8a485bd352eb1ff9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.955 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.956 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f753cc60-aff5-4587-b51e-1774dc8298dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.956032', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '436842aa-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': '07d7c6545789905b9cfc5f04dc43a56fec04cca15815424c2ec781e46e01ee89'}]}, 'timestamp': '2025-10-05 10:08:38.956483', '_unique_id': '9ca00af9ff554252a219031a590aa8dc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 
06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.957 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1c4de317-588e-4935-9cf3-bd5ead8c98be', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:08:38.957769', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '4368813e-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12398.065410061, 'message_signature': '5053814ae743c16e9999db61d305cfeffec8be2b82003133b30f17c346cae636'}]}, 'timestamp': '2025-10-05 10:08:38.958051', '_unique_id': 'cb7db244f2504c8eb04869125208b6b9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:08:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:08:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:08:38.958 12 ERROR oslo_messaging.notify.messaging Oct 5 06:08:39 localhost podman[335552]: Oct 5 06:08:39 
localhost podman[335552]: 2025-10-05 10:08:39.31492983 +0000 UTC m=+0.109800400 container create e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:08:39 localhost podman[335552]: 2025-10-05 10:08:39.248063064 +0000 UTC m=+0.042933644 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:08:39 localhost systemd[1]: Started libpod-conmon-e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a.scope. Oct 5 06:08:39 localhost systemd[1]: tmp-crun.seC5BD.mount: Deactivated successfully. Oct 5 06:08:39 localhost systemd[1]: Started libcrun container. 
Oct 5 06:08:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c12070a2958ae095ddc59eee7872dd5e2f9cdf63ce8689951fea9da3b46e25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:08:39 localhost podman[335552]: 2025-10-05 10:08:39.428510959 +0000 UTC m=+0.223381539 container init e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:08:39 localhost podman[335552]: 2025-10-05 10:08:39.437836209 +0000 UTC m=+0.232706779 container start e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001) Oct 5 06:08:39 localhost dnsmasq[335570]: started, version 2.85 cachesize 150 Oct 5 06:08:39 localhost dnsmasq[335570]: DNS service limited to local subnets Oct 5 06:08:39 localhost dnsmasq[335570]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:08:39 localhost dnsmasq[335570]: warning: no upstream servers configured Oct 
5 06:08:39 localhost dnsmasq-dhcp[335570]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:08:39 localhost dnsmasq[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/addn_hosts - 0 addresses Oct 5 06:08:39 localhost dnsmasq-dhcp[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/host Oct 5 06:08:39 localhost dnsmasq-dhcp[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/opts Oct 5 06:08:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:39.496 272040 INFO neutron.agent.dhcp.agent [None req-10c4aa88-a19a-4c12-aeba-0cde58ca48bf - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:37Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=35b57481-ee3b-4858-83b8-caa0a97c2cf8, ip_allocation=immediate, mac_address=fa:16:3e:57:72:b6, name=tempest-PortsTestJSON-1467345796, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:08:35Z, description=, dns_domain=, id=3f800b06-45d0-44a2-94b5-eaa2adcba16d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1770279820, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40149, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2485, status=ACTIVE, subnets=['92c6c736-3393-4c50-a573-c57ccb4c9e3d'], tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:36Z, vlan_transparent=None, network_id=3f800b06-45d0-44a2-94b5-eaa2adcba16d, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['403ef325-843a-42e9-9412-a4f8fc546f92'], standard_attr_id=2508, status=DOWN, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:37Z on network 3f800b06-45d0-44a2-94b5-eaa2adcba16d#033[00m Oct 5 06:08:39 localhost podman[335588]: 2025-10-05 10:08:39.728662337 +0000 UTC m=+0.064451101 container kill e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:39 localhost dnsmasq[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/addn_hosts - 1 addresses Oct 5 06:08:39 localhost dnsmasq-dhcp[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/host Oct 5 06:08:39 localhost dnsmasq-dhcp[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/opts Oct 5 06:08:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:39.752 272040 INFO neutron.agent.dhcp.agent [None req-4aff890b-e115-4386-a4b5-ce8049b776d8 - - - - - -] DHCP configuration for ports {'99fa7e57-6193-4dcc-bd61-7de722df087e'} is completed#033[00m Oct 5 06:08:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:39.998 272040 INFO neutron.agent.dhcp.agent [None req-aeb2bc92-509e-4cbb-9c8d-52ee8553ce44 - - - - - -] DHCP configuration for ports {'35b57481-ee3b-4858-83b8-caa0a97c2cf8'} is completed#033[00m Oct 5 06:08:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:08:40 localhost podman[335610]: 2025-10-05 10:08:40.151626563 +0000 UTC m=+0.066251629 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Oct 5 06:08:40 localhost podman[335610]: 2025-10-05 10:08:40.163712078 
+0000 UTC m=+0.078337134 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:08:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e170 do_prune osdmap full prune enabled Oct 5 06:08:40 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated 
successfully. Oct 5 06:08:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e171 e171: 6 total, 6 up, 6 in Oct 5 06:08:40 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e171: 6 total, 6 up, 6 in Oct 5 06:08:40 localhost nova_compute[297021]: 2025-10-05 10:08:40.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:40 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:40.940 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 
'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:40 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:40.942 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:40 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:40.944 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:40 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:40.945 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[82bf1cd1-5539-43ee-b7b8-12db314dba40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:41 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:41.168 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:37Z, description=, device_id=8c8332fd-7be0-40e4-b35d-40b803a095aa, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=35b57481-ee3b-4858-83b8-caa0a97c2cf8, ip_allocation=immediate, 
mac_address=fa:16:3e:57:72:b6, name=tempest-PortsTestJSON-1467345796, network_id=3f800b06-45d0-44a2-94b5-eaa2adcba16d, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['403ef325-843a-42e9-9412-a4f8fc546f92'], standard_attr_id=2508, status=ACTIVE, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:39Z on network 3f800b06-45d0-44a2-94b5-eaa2adcba16d#033[00m Oct 5 06:08:41 localhost dnsmasq[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/addn_hosts - 1 addresses Oct 5 06:08:41 localhost dnsmasq-dhcp[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/host Oct 5 06:08:41 localhost podman[335646]: 2025-10-05 10:08:41.426689467 +0000 UTC m=+0.065008436 container kill e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:08:41 localhost dnsmasq-dhcp[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/opts Oct 5 06:08:41 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:41.674 272040 INFO neutron.agent.dhcp.agent [None req-5762520c-4f6b-4cd2-aa5e-df8041301de7 - - - - - -] DHCP configuration for ports {'35b57481-ee3b-4858-83b8-caa0a97c2cf8'} is completed#033[00m Oct 5 06:08:41 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:41.732 2 INFO neutron.agent.securitygroups_rpc [None req-7e721a7e-d21f-4645-a45c-3d855438b9f0 
f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e171 do_prune osdmap full prune enabled Oct 5 06:08:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e172 e172: 6 total, 6 up, 6 in Oct 5 06:08:41 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e172: 6 total, 6 up, 6 in Oct 5 06:08:41 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:41.881 2 INFO neutron.agent.securitygroups_rpc [None req-46b5e596-709d-4f9e-9c8c-ea4b8685f500 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:42 localhost systemd[1]: tmp-crun.vOFeei.mount: Deactivated successfully. 
Oct 5 06:08:42 localhost dnsmasq[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/addn_hosts - 0 addresses Oct 5 06:08:42 localhost dnsmasq-dhcp[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/host Oct 5 06:08:42 localhost dnsmasq-dhcp[335570]: read /var/lib/neutron/dhcp/3f800b06-45d0-44a2-94b5-eaa2adcba16d/opts Oct 5 06:08:42 localhost podman[335684]: 2025-10-05 10:08:42.184993797 +0000 UTC m=+0.084547111 container kill e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 5 06:08:42 localhost ovn_controller[157794]: 2025-10-05T10:08:42Z|00357|binding|INFO|Releasing lport f6666f61-0e91-4e1f-9f78-41f7259e7822 from this chassis (sb_readonly=0) Oct 5 06:08:42 localhost ovn_controller[157794]: 2025-10-05T10:08:42Z|00358|binding|INFO|Setting lport f6666f61-0e91-4e1f-9f78-41f7259e7822 down in Southbound Oct 5 06:08:42 localhost nova_compute[297021]: 2025-10-05 10:08:42.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:42 localhost kernel: device tapf6666f61-0e left promiscuous mode Oct 5 06:08:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:42.416 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], 
up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-3f800b06-45d0-44a2-94b5-eaa2adcba16d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f800b06-45d0-44a2-94b5-eaa2adcba16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9ecbbf-51f6-4c1a-8657-b8df63f7737f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f6666f61-0e91-4e1f-9f78-41f7259e7822) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:42.418 163434 INFO neutron.agent.ovn.metadata.agent [-] Port f6666f61-0e91-4e1f-9f78-41f7259e7822 in datapath 3f800b06-45d0-44a2-94b5-eaa2adcba16d unbound from our chassis#033[00m Oct 5 06:08:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:42.420 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f800b06-45d0-44a2-94b5-eaa2adcba16d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:42.421 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[aef96829-aff3-43e1-859b-5ddb2d0fa046]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 
06:08:42 localhost nova_compute[297021]: 2025-10-05 10:08:42.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:42 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:42.496 2 INFO neutron.agent.securitygroups_rpc [None req-7d93ff03-dd34-4397-8de2-2f769fd63ab7 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:42 localhost dnsmasq[335570]: exiting on receipt of SIGTERM Oct 5 06:08:42 localhost podman[335724]: 2025-10-05 10:08:42.910237599 +0000 UTC m=+0.064454762 container kill e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:08:42 localhost systemd[1]: libpod-e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a.scope: Deactivated successfully. Oct 5 06:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:08:42 localhost podman[335735]: 2025-10-05 10:08:42.978117712 +0000 UTC m=+0.052971274 container died e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:08:43 localhost podman[335735]: 2025-10-05 10:08:43.012292389 +0000 UTC m=+0.087145921 container cleanup e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:08:43 localhost systemd[1]: libpod-conmon-e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a.scope: Deactivated successfully. 
Oct 5 06:08:43 localhost podman[335743]: 2025-10-05 10:08:43.023654574 +0000 UTC m=+0.085918537 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, distribution-scope=public) Oct 5 06:08:43 localhost podman[335743]: 2025-10-05 10:08:43.066707421 +0000 UTC m=+0.128971384 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 5 06:08:43 localhost podman[335737]: 2025-10-05 10:08:43.078107397 +0000 UTC m=+0.141588544 container remove e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f800b06-45d0-44a2-94b5-eaa2adcba16d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:08:43 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:08:43 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:43.115 272040 INFO neutron.agent.dhcp.agent [None req-edcba94a-38f2-4234-8c33-8cc4939a400c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:43 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:43.324 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:43 localhost ovn_controller[157794]: 2025-10-05T10:08:43Z|00359|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:08:43 localhost nova_compute[297021]: 2025-10-05 10:08:43.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:43 localhost systemd[1]: tmp-crun.cww8fU.mount: Deactivated successfully. Oct 5 06:08:43 localhost systemd[1]: var-lib-containers-storage-overlay-28c12070a2958ae095ddc59eee7872dd5e2f9cdf63ce8689951fea9da3b46e25-merged.mount: Deactivated successfully. Oct 5 06:08:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1a56a47ba7bbb478b5e3c8abeb6621c1425bdd995b9716bdcb791bcbec9487a-userdata-shm.mount: Deactivated successfully. Oct 5 06:08:43 localhost systemd[1]: run-netns-qdhcp\x2d3f800b06\x2d45d0\x2d44a2\x2d94b5\x2deaa2adcba16d.mount: Deactivated successfully. 
Oct 5 06:08:45 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:45.258 2 INFO neutron.agent.securitygroups_rpc [None req-e552e646-5615-4c88-85b4-014221181c72 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:45 localhost nova_compute[297021]: 2025-10-05 10:08:45.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:45 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:45.939 2 INFO neutron.agent.securitygroups_rpc [None req-aeaf9d2a-4214-4a38-954b-4933dbf990b2 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:08:46 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:46.621 272040 INFO neutron.agent.linux.ip_lib [None req-fac5b601-aa19-43c6-9ded-7062e2588f8e - - - - - -] Device tap5d3b717e-04 cannot be used as it has no MAC address#033[00m Oct 5 06:08:46 localhost systemd[1]: tmp-crun.MNah0G.mount: Deactivated successfully. 
Oct 5 06:08:46 localhost nova_compute[297021]: 2025-10-05 10:08:46.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:46 localhost podman[335787]: 2025-10-05 10:08:46.689532068 +0000 UTC m=+0.133229158 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:08:46 localhost kernel: device tap5d3b717e-04 entered promiscuous mode Oct 5 06:08:46 localhost NetworkManager[5981]: [1759658926.6952] manager: (tap5d3b717e-04): new Generic device (/org/freedesktop/NetworkManager/Devices/57) Oct 5 06:08:46 localhost ovn_controller[157794]: 2025-10-05T10:08:46Z|00360|binding|INFO|Claiming lport 5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48 for this chassis. 
Oct 5 06:08:46 localhost ovn_controller[157794]: 2025-10-05T10:08:46Z|00361|binding|INFO|5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48: Claiming unknown
Oct 5 06:08:46 localhost nova_compute[297021]: 2025-10-05 10:08:46.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:46 localhost systemd-udevd[335817]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:08:46 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:46.719 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3520bd-795e-496b-9bd8-63b98bafb741, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:08:46 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:46.721 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48 in datapath 6c5c636c-bc8a-429a-8f10-8f4508a77c3b bound to our chassis#033[00m
Oct 5 06:08:46 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:46.725 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 5a74b9de-50d6-4353-8e8b-6f847fc87d19 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Oct 5 06:08:46 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:46.725 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:08:46 localhost journal[237931]: ethtool ioctl error on tap5d3b717e-04: No such device
Oct 5 06:08:46 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:46.726 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[95722779-1b2b-4d7a-ba98-1a517bbd88ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:08:46 localhost podman[335787]: 2025-10-05 10:08:46.728865444 +0000 UTC m=+0.172562504 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 06:08:46 localhost journal[237931]: ethtool ioctl error on tap5d3b717e-04: No such device
Oct 5 06:08:46 localhost journal[237931]: ethtool ioctl error on tap5d3b717e-04: No such device
Oct 5 06:08:46 localhost ovn_controller[157794]: 2025-10-05T10:08:46Z|00362|binding|INFO|Setting lport 5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48 ovn-installed in OVS
Oct 5 06:08:46 localhost nova_compute[297021]: 2025-10-05 10:08:46.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:46 localhost ovn_controller[157794]: 2025-10-05T10:08:46Z|00363|binding|INFO|Setting lport 5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48 up in Southbound
Oct 5 06:08:46 localhost nova_compute[297021]: 2025-10-05 10:08:46.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:46 localhost journal[237931]: ethtool ioctl error on tap5d3b717e-04: No such device
Oct 5 06:08:46 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 06:08:46 localhost journal[237931]: ethtool ioctl error on tap5d3b717e-04: No such device
Oct 5 06:08:46 localhost journal[237931]: ethtool ioctl error on tap5d3b717e-04: No such device
Oct 5 06:08:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:08:46 localhost journal[237931]: ethtool ioctl error on tap5d3b717e-04: No such device
Oct 5 06:08:46 localhost journal[237931]: ethtool ioctl error on tap5d3b717e-04: No such device
Oct 5 06:08:46 localhost nova_compute[297021]: 2025-10-05 10:08:46.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:46 localhost nova_compute[297021]: 2025-10-05 10:08:46.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:47 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:47.073 2 INFO neutron.agent.securitygroups_rpc [None req-dc69c701-7ca0-4311-bb6d-f3f426b0fdc0 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['72f8357d-4c2a-4c55-a9b5-4ba9e09e68d5']#033[00m
Oct 5 06:08:47 localhost podman[335888]:
Oct 5 06:08:47 localhost podman[335888]: 2025-10-05 10:08:47.717437516 +0000 UTC m=+0.089958846 container create 4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 5 06:08:47 localhost systemd[1]: Started libpod-conmon-4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213.scope.
Oct 5 06:08:47 localhost systemd[1]: tmp-crun.2jSo6Z.mount: Deactivated successfully.
Oct 5 06:08:47 localhost podman[335888]: 2025-10-05 10:08:47.67626286 +0000 UTC m=+0.048784220 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:08:47 localhost systemd[1]: Started libcrun container.
Oct 5 06:08:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e2070a2a284cbce2d23f683af17699eb56785e39f070a85b318bf6b9dd1313/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:08:47 localhost podman[335888]: 2025-10-05 10:08:47.810037422 +0000 UTC m=+0.182558742 container init 4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 5 06:08:47 localhost podman[335888]: 2025-10-05 10:08:47.820258387 +0000 UTC m=+0.192779697 container start 4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:08:47 localhost dnsmasq[335906]: started, version 2.85 cachesize 150
Oct 5 06:08:47 localhost dnsmasq[335906]: DNS service limited to local subnets
Oct 5 06:08:47 localhost dnsmasq[335906]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:08:47 localhost dnsmasq[335906]: warning: no upstream servers configured
Oct 5 06:08:47 localhost dnsmasq-dhcp[335906]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:08:47 localhost dnsmasq[335906]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/addn_hosts - 0 addresses
Oct 5 06:08:47 localhost dnsmasq-dhcp[335906]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/host
Oct 5 06:08:47 localhost dnsmasq-dhcp[335906]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/opts
Oct 5 06:08:47 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:47.880 272040 INFO neutron.agent.dhcp.agent [None req-cdf32881-7a49-4b79-ba58-70648290a899 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:46Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0c18fb28-083d-4132-9622-49193a1e40f4, ip_allocation=immediate, mac_address=fa:16:3e:d0:b9:a6, name=tempest-PortsTestJSON-585589288, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:54Z, description=, dns_domain=, id=6c5c636c-bc8a-429a-8f10-8f4508a77c3b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-1784334432, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19434, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2235, status=ACTIVE, subnets=['0cd034ab-d3cb-4144-a9be-81180fa074bd'], tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:44Z, vlan_transparent=None, network_id=6c5c636c-bc8a-429a-8f10-8f4508a77c3b, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['72f8357d-4c2a-4c55-a9b5-4ba9e09e68d5'], standard_attr_id=2569, status=DOWN, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:46Z on network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b#033[00m
Oct 5 06:08:48 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:48.014 272040 INFO neutron.agent.dhcp.agent [None req-8ce7d136-524a-448e-90cc-22360e611a31 - - - - - -] DHCP configuration for ports {'5b52ed09-1238-4fa6-9a9b-4538c79b4319', '405e4ec1-95c6-4d96-9868-4d6f8824ae0c'} is completed#033[00m
Oct 5 06:08:48 localhost dnsmasq[335906]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/addn_hosts - 1 addresses
Oct 5 06:08:48 localhost dnsmasq-dhcp[335906]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/host
Oct 5 06:08:48 localhost podman[335922]: 2025-10-05 10:08:48.171938278 +0000 UTC m=+0.061817630 container kill 4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 06:08:48 localhost dnsmasq-dhcp[335906]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/opts
Oct 5 06:08:48 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:48.394 272040 INFO neutron.agent.dhcp.agent [None req-f6d15524-ce7c-4822-b14d-34bac914d472 - - - - - -] DHCP configuration for ports {'0c18fb28-083d-4132-9622-49193a1e40f4'} is completed#033[00m
Oct 5 06:08:48 localhost dnsmasq[335906]: exiting on receipt of SIGTERM
Oct 5 06:08:48 localhost podman[335957]: 2025-10-05 10:08:48.628505936 +0000 UTC m=+0.064689077 container kill 4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:08:48 localhost systemd[1]: libpod-4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213.scope: Deactivated successfully.
Oct 5 06:08:48 localhost podman[335972]: 2025-10-05 10:08:48.704081385 +0000 UTC m=+0.058627245 container died 4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 5 06:08:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213-userdata-shm.mount: Deactivated successfully.
Oct 5 06:08:48 localhost systemd[1]: var-lib-containers-storage-overlay-22e2070a2a284cbce2d23f683af17699eb56785e39f070a85b318bf6b9dd1313-merged.mount: Deactivated successfully.
Oct 5 06:08:48 localhost podman[335972]: 2025-10-05 10:08:48.740099933 +0000 UTC m=+0.094645753 container cleanup 4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 5 06:08:48 localhost systemd[1]: libpod-conmon-4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213.scope: Deactivated successfully.
Oct 5 06:08:48 localhost podman[335973]: 2025-10-05 10:08:48.780850907 +0000 UTC m=+0.130682560 container remove 4ecaea3db7b99cbd96db6622729402e8aa349bd73131bcd43a984a4ed6ca2213 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2)
Oct 5 06:08:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 5 06:08:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e173 e173: 6 total, 6 up, 6 in
Oct 5 06:08:48 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e173: 6 total, 6 up, 6 in
Oct 5 06:08:48 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:48.869 2 INFO neutron.agent.securitygroups_rpc [None req-97ecff30-885e-46f1-be36-64d2a7b05cf3 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:08:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:48.979 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f0:65 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3520bd-795e-496b-9bd8-63b98bafb741, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=405e4ec1-95c6-4d96-9868-4d6f8824ae0c) old=Port_Binding(mac=['fa:16:3e:c1:f0:65 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:08:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:48.981 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 405e4ec1-95c6-4d96-9868-4d6f8824ae0c in datapath 6c5c636c-bc8a-429a-8f10-8f4508a77c3b updated#033[00m
Oct 5 06:08:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:48.983 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 5a74b9de-50d6-4353-8e8b-6f847fc87d19 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Oct 5 06:08:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:48.984 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:08:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:48.985 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[edc06838-290b-4998-b94e-d8d585d9144a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:08:49 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:49.312 2 INFO neutron.agent.securitygroups_rpc [None req-d91fca24-9e84-4376-871e-8b0391de4b7d f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:08:49 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:49.808 272040 INFO neutron.agent.linux.ip_lib [None req-7836a6af-738f-4f2e-b736-07aa631ff262 - - - - - -] Device tap51ef4aa5-49 cannot be used as it has no MAC address#033[00m
Oct 5 06:08:49 localhost nova_compute[297021]: 2025-10-05 10:08:49.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:49 localhost kernel: device tap51ef4aa5-49 entered promiscuous mode
Oct 5 06:08:49 localhost nova_compute[297021]: 2025-10-05 10:08:49.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:49 localhost NetworkManager[5981]: [1759658929.8478] manager: (tap51ef4aa5-49): new Generic device (/org/freedesktop/NetworkManager/Devices/58)
Oct 5 06:08:49 localhost systemd-udevd[336021]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:08:49 localhost ovn_controller[157794]: 2025-10-05T10:08:49Z|00364|binding|INFO|Claiming lport 51ef4aa5-49b0-4c06-8223-999b05c8fcc3 for this chassis.
Oct 5 06:08:49 localhost ovn_controller[157794]: 2025-10-05T10:08:49Z|00365|binding|INFO|51ef4aa5-49b0-4c06-8223-999b05c8fcc3: Claiming unknown
Oct 5 06:08:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:49.864 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c8daf35e79847329bde1c6cf0340477', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af9e6832-0d64-4de3-813f-a8d270389d12, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=51ef4aa5-49b0-4c06-8223-999b05c8fcc3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:08:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:49.867 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 51ef4aa5-49b0-4c06-8223-999b05c8fcc3 in datapath 041cfe8f-6406-4960-bbb2-faeb2bcfb0e5 bound to our chassis#033[00m
Oct 5 06:08:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:49.870 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port e85c27eb-eae6-44e2-b1f9-64e00e080b47 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Oct 5 06:08:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:49.870 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:08:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:49.871 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[537320aa-fddb-4847-b77c-cf1854c33391]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:08:49 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:49.896 2 INFO neutron.agent.securitygroups_rpc [None req-441d5573-cc7b-44a8-a501-d1e820729695 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['72f8357d-4c2a-4c55-a9b5-4ba9e09e68d5', 'faf5b389-f9b1-4f45-9607-3142b5368a3b']#033[00m
Oct 5 06:08:49 localhost ovn_controller[157794]: 2025-10-05T10:08:49Z|00366|binding|INFO|Setting lport 51ef4aa5-49b0-4c06-8223-999b05c8fcc3 ovn-installed in OVS
Oct 5 06:08:49 localhost ovn_controller[157794]: 2025-10-05T10:08:49Z|00367|binding|INFO|Setting lport 51ef4aa5-49b0-4c06-8223-999b05c8fcc3 up in Southbound
Oct 5 06:08:49 localhost nova_compute[297021]: 2025-10-05 10:08:49.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:49 localhost nova_compute[297021]: 2025-10-05 10:08:49.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:49 localhost nova_compute[297021]: 2025-10-05 10:08:49.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:50 localhost nova_compute[297021]: 2025-10-05 10:08:50.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:50 localhost nova_compute[297021]: 2025-10-05 10:08:50.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:08:50 localhost podman[336090]:
Oct 5 06:08:50 localhost podman[336090]: 2025-10-05 10:08:50.621429304 +0000 UTC m=+0.100595612 container create 45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:08:50 localhost systemd[1]: Started libpod-conmon-45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff.scope.
Oct 5 06:08:50 localhost podman[336090]: 2025-10-05 10:08:50.577172546 +0000 UTC m=+0.056338874 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:08:50 localhost systemd[1]: tmp-crun.Ch83HA.mount: Deactivated successfully.
Oct 5 06:08:50 localhost systemd[1]: Started libcrun container.
Oct 5 06:08:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 06:08:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086d1db1fc15e235ff6e05392271d133ca44a94444691fd095a79b610b419696/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:08:50 localhost podman[336090]: 2025-10-05 10:08:50.776766354 +0000 UTC m=+0.255932672 container init 45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:08:50 localhost podman[336090]: 2025-10-05 10:08:50.786020723 +0000 UTC m=+0.265187031 container start 45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:08:50 localhost dnsmasq[336137]: started, version 2.85 cachesize 150
Oct 5 06:08:50 localhost dnsmasq[336137]: DNS service limited to local subnets
Oct 5 06:08:50 localhost dnsmasq[336137]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:08:50 localhost dnsmasq[336137]: warning: no upstream servers configured
Oct 5 06:08:50 localhost dnsmasq-dhcp[336137]: DHCP, static leases only on 10.100.0.16, lease time 1d
Oct 5 06:08:50 localhost dnsmasq-dhcp[336137]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:08:50 localhost dnsmasq[336137]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/addn_hosts - 1 addresses
Oct 5 06:08:50 localhost dnsmasq-dhcp[336137]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/host
Oct 5 06:08:50 localhost dnsmasq-dhcp[336137]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/opts
Oct 5 06:08:50 localhost podman[336123]: 2025-10-05 10:08:50.831936736 +0000 UTC m=+0.106282695 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 06:08:50 localhost podman[336123]: 2025-10-05 10:08:50.841659147 +0000 UTC m=+0.116005136 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 06:08:50 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:50.846 272040 INFO neutron.agent.dhcp.agent [None req-5cdd35e0-0cd4-4d0a-ae76-ff3517436a48 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:46Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0c18fb28-083d-4132-9622-49193a1e40f4, ip_allocation=immediate, mac_address=fa:16:3e:d0:b9:a6, name=tempest-PortsTestJSON-874389845, network_id=6c5c636c-bc8a-429a-8f10-8f4508a77c3b, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['faf5b389-f9b1-4f45-9607-3142b5368a3b'], standard_attr_id=2569, status=DOWN, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:49Z on network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b#033[00m
Oct 5 06:08:50 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:50.853 272040 INFO oslo.privsep.daemon [None req-5cdd35e0-0cd4-4d0a-ae76-ff3517436a48 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmpm366prm0/privsep.sock']#033[00m
Oct 5 06:08:50 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 06:08:50 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:50.873 2 INFO neutron.agent.securitygroups_rpc [None req-1ce17b82-de95-46e3-b5c3-60132d9e9999 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['faf5b389-f9b1-4f45-9607-3142b5368a3b']#033[00m
Oct 5 06:08:50 localhost dnsmasq[336137]: exiting on receipt of SIGTERM
Oct 5 06:08:50 localhost systemd[1]: libpod-45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff.scope: Deactivated successfully.
Oct 5 06:08:50 localhost podman[336145]: 2025-10-05 10:08:50.90400187 +0000 UTC m=+0.075354003 container died 45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:50 localhost podman[336145]: 2025-10-05 10:08:50.937575382 +0000 UTC m=+0.108927465 container cleanup 45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:08:50 localhost podman[336169]: 2025-10-05 10:08:50.965676176 +0000 UTC m=+0.061816711 container cleanup 45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:08:50 
localhost systemd[1]: libpod-conmon-45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff.scope: Deactivated successfully. Oct 5 06:08:51 localhost podman[336185]: 2025-10-05 10:08:51.02913382 +0000 UTC m=+0.073588306 container remove 45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:08:51 localhost podman[336203]: Oct 5 06:08:51 localhost podman[336203]: 2025-10-05 10:08:51.123674509 +0000 UTC m=+0.076101564 container create 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:08:51 localhost systemd[1]: Started libpod-conmon-5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5.scope. Oct 5 06:08:51 localhost systemd[1]: Started libcrun container. 
Oct 5 06:08:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc69427e010beafa3544b0e3175c7cf9f78a61b7e81bfba29dbbdb6dbdbe89c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:08:51 localhost podman[336203]: 2025-10-05 10:08:51.091647058 +0000 UTC m=+0.044074123 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:08:51 localhost podman[336203]: 2025-10-05 10:08:51.195565649 +0000 UTC m=+0.147992714 container init 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:08:51 localhost podman[336203]: 2025-10-05 10:08:51.205086704 +0000 UTC m=+0.157513799 container start 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:51 localhost dnsmasq[336221]: started, version 2.85 cachesize 150 Oct 5 06:08:51 localhost dnsmasq[336221]: DNS service limited to local subnets Oct 5 06:08:51 localhost dnsmasq[336221]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:08:51 localhost dnsmasq[336221]: warning: no upstream servers configured Oct 5 06:08:51 localhost dnsmasq-dhcp[336221]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:08:51 localhost dnsmasq[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/addn_hosts - 0 addresses Oct 5 06:08:51 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/host Oct 5 06:08:51 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/opts Oct 5 06:08:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:51.246 272040 INFO neutron.agent.dhcp.agent [None req-0b288234-dcdc-41ca-8c22-7ed3a5083d50 - - - - - -] DHCP configuration for ports {'5b52ed09-1238-4fa6-9a9b-4538c79b4319', '5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48', '0c18fb28-083d-4132-9622-49193a1e40f4', '405e4ec1-95c6-4d96-9868-4d6f8824ae0c'} is completed#033[00m Oct 5 06:08:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:51.425 272040 INFO neutron.agent.dhcp.agent [None req-5f178f05-2058-4613-9244-5ad1f15975c5 - - - - - -] DHCP configuration for ports {'64d0ceab-9aa9-4306-b5f2-3cc69d86d1c3'} is completed#033[00m Oct 5 06:08:51 localhost podman[248506]: time="2025-10-05T10:08:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:08:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:51 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1712235024' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:51 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1712235024' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:51 localhost podman[248506]: @ - - [05/Oct/2025:10:08:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147502 "" "Go-http-client/1.1" Oct 5 06:08:51 localhost podman[248506]: @ - - [05/Oct/2025:10:08:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19852 "" "Go-http-client/1.1" Oct 5 06:08:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:51.522 272040 INFO oslo.privsep.daemon [None req-5cdd35e0-0cd4-4d0a-ae76-ff3517436a48 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 06:08:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:51.379 336222 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 06:08:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:51.384 336222 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 06:08:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:51.388 336222 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Oct 5 06:08:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:51.388 336222 INFO oslo.privsep.daemon [-] privsep daemon running as pid 336222#033[00m Oct 5 06:08:51 localhost systemd[1]: var-lib-containers-storage-overlay-086d1db1fc15e235ff6e05392271d133ca44a94444691fd095a79b610b419696-merged.mount: Deactivated 
successfully. Oct 5 06:08:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45f7687727ddd9b9587db969656cf72c7abd6c1b12c90f0717143b6bbe9574ff-userdata-shm.mount: Deactivated successfully. Oct 5 06:08:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e173 do_prune osdmap full prune enabled Oct 5 06:08:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e174 e174: 6 total, 6 up, 6 in Oct 5 06:08:51 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e174: 6 total, 6 up, 6 in Oct 5 06:08:51 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:51.997 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:51Z, description=, device_id=7317d80c-2795-4c74-8928-447aa6ca9c1b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8bd3745f-fbc8-4d05-beb5-601b03fb507a, ip_allocation=immediate, mac_address=fa:16:3e:30:ec:f5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:08:46Z, description=, dns_domain=, id=041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-2145772512-network, port_security_enabled=True, project_id=1c8daf35e79847329bde1c6cf0340477, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44451, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2570, status=ACTIVE, subnets=['c1832f08-c926-4144-8fd6-8310e551b356'], 
tags=[], tenant_id=1c8daf35e79847329bde1c6cf0340477, updated_at=2025-10-05T10:08:47Z, vlan_transparent=None, network_id=041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, port_security_enabled=False, project_id=1c8daf35e79847329bde1c6cf0340477, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2588, status=DOWN, tags=[], tenant_id=1c8daf35e79847329bde1c6cf0340477, updated_at=2025-10-05T10:08:51Z on network 041cfe8f-6406-4960-bbb2-faeb2bcfb0e5#033[00m Oct 5 06:08:52 localhost openstack_network_exporter[250601]: ERROR 10:08:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:08:52 localhost openstack_network_exporter[250601]: ERROR 10:08:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:08:52 localhost openstack_network_exporter[250601]: ERROR 10:08:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:08:52 localhost openstack_network_exporter[250601]: Oct 5 06:08:52 localhost openstack_network_exporter[250601]: ERROR 10:08:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:08:52 localhost openstack_network_exporter[250601]: ERROR 10:08:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:08:52 localhost openstack_network_exporter[250601]: Oct 5 06:08:52 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:52.072 2 INFO neutron.agent.securitygroups_rpc [None req-32c7cc31-7050-4c63-9938-69794efbcacf f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:52 localhost systemd[1]: tmp-crun.wWkXpe.mount: Deactivated successfully. 
Oct 5 06:08:52 localhost dnsmasq[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/addn_hosts - 1 addresses Oct 5 06:08:52 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/host Oct 5 06:08:52 localhost podman[336243]: 2025-10-05 10:08:52.18897629 +0000 UTC m=+0.055237244 container kill 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:08:52 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/opts Oct 5 06:08:52 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:52.399 272040 INFO neutron.agent.dhcp.agent [None req-05821f4c-d8db-4dac-aebb-eb980cc92322 - - - - - -] DHCP configuration for ports {'8bd3745f-fbc8-4d05-beb5-601b03fb507a'} is completed#033[00m Oct 5 06:08:52 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:52.674 272040 WARNING neutron.agent.linux.dhcp [None req-5cdd35e0-0cd4-4d0a-ae76-ff3517436a48 - - - - - -] Could not release DHCP leases for these IP addresses after 3 tries: 10.100.0.9#033[00m Oct 5 06:08:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e174 do_prune osdmap full prune enabled Oct 5 06:08:52 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:52.921 2 INFO neutron.agent.securitygroups_rpc [None req-03b7ae7c-5799-41ca-93aa-636cb21bad68 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 
5 06:08:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e175 e175: 6 total, 6 up, 6 in Oct 5 06:08:52 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e175: 6 total, 6 up, 6 in Oct 5 06:08:53 localhost podman[336293]: Oct 5 06:08:53 localhost podman[336293]: 2025-10-05 10:08:53.191979039 +0000 UTC m=+0.087744236 container create 9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:08:53 localhost systemd[1]: Started libpod-conmon-9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b.scope. Oct 5 06:08:53 localhost systemd[1]: tmp-crun.rO2Lyx.mount: Deactivated successfully. Oct 5 06:08:53 localhost podman[336293]: 2025-10-05 10:08:53.151596326 +0000 UTC m=+0.047361563 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:08:53 localhost systemd[1]: Started libcrun container. 
Oct 5 06:08:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1716ff0e095db6cc8ace3192dafa33aeabad2e13e9b237f58f1514d22bff6b3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:08:53 localhost podman[336293]: 2025-10-05 10:08:53.28138743 +0000 UTC m=+0.177152627 container init 9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:08:53 localhost podman[336293]: 2025-10-05 10:08:53.289478727 +0000 UTC m=+0.185243934 container start 9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:08:53 localhost dnsmasq[336311]: started, version 2.85 cachesize 150 Oct 5 06:08:53 localhost dnsmasq[336311]: DNS service limited to local subnets Oct 5 06:08:53 localhost dnsmasq[336311]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:08:53 localhost dnsmasq[336311]: warning: no upstream servers configured Oct 
5 06:08:53 localhost dnsmasq-dhcp[336311]: DHCP, static leases only on 10.100.0.16, lease time 1d Oct 5 06:08:53 localhost dnsmasq-dhcp[336311]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:08:53 localhost dnsmasq[336311]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/addn_hosts - 1 addresses Oct 5 06:08:53 localhost dnsmasq-dhcp[336311]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/host Oct 5 06:08:53 localhost dnsmasq-dhcp[336311]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/opts Oct 5 06:08:53 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:53.532 272040 INFO neutron.agent.dhcp.agent [None req-1858cffa-c55b-4d50-abec-7c2acc4144b5 - - - - - -] DHCP configuration for ports {'0c18fb28-083d-4132-9622-49193a1e40f4'} is completed#033[00m Oct 5 06:08:53 localhost dnsmasq[336311]: exiting on receipt of SIGTERM Oct 5 06:08:53 localhost podman[336337]: 2025-10-05 10:08:53.738363539 +0000 UTC m=+0.059265122 container kill 9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:08:53 localhost systemd[1]: libpod-9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b.scope: Deactivated successfully. 
Oct 5 06:08:53 localhost podman[336350]: 2025-10-05 10:08:53.80429586 +0000 UTC m=+0.051576817 container died 9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:08:53 localhost podman[336350]: 2025-10-05 10:08:53.845806004 +0000 UTC m=+0.093086921 container cleanup 9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:53 localhost systemd[1]: libpod-conmon-9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b.scope: Deactivated successfully. 
Oct 5 06:08:53 localhost podman[336352]: 2025-10-05 10:08:53.880483065 +0000 UTC m=+0.121715859 container remove 9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:08:53 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:53.893 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:51Z, description=, device_id=7317d80c-2795-4c74-8928-447aa6ca9c1b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8bd3745f-fbc8-4d05-beb5-601b03fb507a, ip_allocation=immediate, mac_address=fa:16:3e:30:ec:f5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:08:46Z, description=, dns_domain=, id=041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-2145772512-network, port_security_enabled=True, project_id=1c8daf35e79847329bde1c6cf0340477, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44451, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2570, status=ACTIVE, subnets=['c1832f08-c926-4144-8fd6-8310e551b356'], tags=[], tenant_id=1c8daf35e79847329bde1c6cf0340477, 
updated_at=2025-10-05T10:08:47Z, vlan_transparent=None, network_id=041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, port_security_enabled=False, project_id=1c8daf35e79847329bde1c6cf0340477, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2588, status=DOWN, tags=[], tenant_id=1c8daf35e79847329bde1c6cf0340477, updated_at=2025-10-05T10:08:51Z on network 041cfe8f-6406-4960-bbb2-faeb2bcfb0e5#033[00m Oct 5 06:08:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e175 do_prune osdmap full prune enabled Oct 5 06:08:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e176 e176: 6 total, 6 up, 6 in Oct 5 06:08:53 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e176: 6 total, 6 up, 6 in Oct 5 06:08:54 localhost dnsmasq[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/addn_hosts - 1 addresses Oct 5 06:08:54 localhost podman[336398]: 2025-10-05 10:08:54.189349867 +0000 UTC m=+0.055446199 container kill 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:08:54 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/host Oct 5 06:08:54 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/opts Oct 5 06:08:54 localhost systemd[1]: var-lib-containers-storage-overlay-1716ff0e095db6cc8ace3192dafa33aeabad2e13e9b237f58f1514d22bff6b3f-merged.mount: Deactivated successfully. 
Oct 5 06:08:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9029fd0b497472f030856c9fbc52d7de333e7f3f4ed6991673bd712a36fc255b-userdata-shm.mount: Deactivated successfully. Oct 5 06:08:54 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:54.471 272040 INFO neutron.agent.dhcp.agent [None req-f4b8de31-c9b2-49cf-9b23-7e3d36b262ca - - - - - -] DHCP configuration for ports {'8bd3745f-fbc8-4d05-beb5-601b03fb507a'} is completed#033[00m Oct 5 06:08:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e176 do_prune osdmap full prune enabled Oct 5 06:08:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e177 e177: 6 total, 6 up, 6 in Oct 5 06:08:54 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e177: 6 total, 6 up, 6 in Oct 5 06:08:54 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e43: np0005471152.kbhlus(active, since 9m), standbys: np0005471150.zwqxye, np0005471151.jecxod Oct 5 06:08:55 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:55.094 2 INFO neutron.agent.securitygroups_rpc [None req-47502f13-ad77-4309-9317-34e0f28a5ba4 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['e41dff43-d69f-4ffb-9be8-bbcee95191da']#033[00m Oct 5 06:08:55 localhost podman[336466]: Oct 5 06:08:55 localhost podman[336466]: 2025-10-05 10:08:55.181501165 +0000 UTC m=+0.112277805 container create b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2) Oct 5 06:08:55 localhost podman[336466]: 2025-10-05 10:08:55.125120271 +0000 UTC m=+0.055896951 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:08:55 localhost systemd[1]: Started libpod-conmon-b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18.scope. Oct 5 06:08:55 localhost systemd[1]: tmp-crun.LDlaVB.mount: Deactivated successfully. Oct 5 06:08:55 localhost systemd[1]: Started libcrun container. Oct 5 06:08:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a549f1f84a5474b5325301fb3b7ca142c9734559b52d1d228cddf67bc29d5f46/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:08:55 localhost podman[336466]: 2025-10-05 10:08:55.287144272 +0000 UTC m=+0.217920922 container init b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:55 localhost podman[336466]: 2025-10-05 10:08:55.29824706 +0000 UTC m=+0.229023700 container start b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, 
org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:08:55 localhost dnsmasq[336484]: started, version 2.85 cachesize 150 Oct 5 06:08:55 localhost dnsmasq[336484]: DNS service limited to local subnets Oct 5 06:08:55 localhost dnsmasq[336484]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:08:55 localhost dnsmasq[336484]: warning: no upstream servers configured Oct 5 06:08:55 localhost dnsmasq-dhcp[336484]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:08:55 localhost dnsmasq-dhcp[336484]: DHCP, static leases only on 10.100.0.16, lease time 1d Oct 5 06:08:55 localhost dnsmasq[336484]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/addn_hosts - 0 addresses Oct 5 06:08:55 localhost dnsmasq-dhcp[336484]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/host Oct 5 06:08:55 localhost dnsmasq-dhcp[336484]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/opts Oct 5 06:08:55 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:55.359 272040 INFO neutron.agent.dhcp.agent [None req-54c1a05b-3b20-421c-a363-ab48170a045a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:54Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c924597e-d3db-495a-8ac1-9b4323a0115e, ip_allocation=immediate, mac_address=fa:16:3e:bb:65:9f, name=tempest-PortsTestJSON-1868612985, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:54Z, description=, dns_domain=, id=6c5c636c-bc8a-429a-8f10-8f4508a77c3b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, 
mtu=1442, name=tempest-PortsTestJSON-test-network-1784334432, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19434, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=2235, status=ACTIVE, subnets=['89f3ec2e-55a8-4868-bcd1-c247ebc47b50', 'b7e2045e-28ab-43a1-a7ef-8b5d6a7bf6b3'], tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:51Z, vlan_transparent=None, network_id=6c5c636c-bc8a-429a-8f10-8f4508a77c3b, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['e41dff43-d69f-4ffb-9be8-bbcee95191da'], standard_attr_id=2605, status=DOWN, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:54Z on network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b#033[00m Oct 5 06:08:55 localhost nova_compute[297021]: 2025-10-05 10:08:55.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:55 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:55.541 2 INFO neutron.agent.securitygroups_rpc [None req-91dbb4a3-e766-4b5e-bc7e-962722a958cb f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:55 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:55.569 272040 INFO neutron.agent.dhcp.agent [None req-2bd3adf6-86e8-4e74-ba8a-a1bc7f04c464 - - - - - -] DHCP configuration for ports {'5b52ed09-1238-4fa6-9a9b-4538c79b4319', '5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48', '405e4ec1-95c6-4d96-9868-4d6f8824ae0c'} is completed#033[00m Oct 5 06:08:55 localhost podman[336503]: 2025-10-05 10:08:55.656813077 +0000 UTC m=+0.070756211 container kill 
b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:08:55 localhost dnsmasq[336484]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/addn_hosts - 1 addresses Oct 5 06:08:55 localhost dnsmasq-dhcp[336484]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/host Oct 5 06:08:55 localhost dnsmasq-dhcp[336484]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/opts Oct 5 06:08:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:55 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1561967010' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:55 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1561967010' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:55 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:55.945 272040 INFO neutron.agent.dhcp.agent [None req-89d7abf7-7a90-4bd8-8b37-e46f61972066 - - - - - -] DHCP configuration for ports {'c924597e-d3db-495a-8ac1-9b4323a0115e'} is completed#033[00m Oct 5 06:08:55 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:55.966 2 INFO neutron.agent.securitygroups_rpc [None req-35702abc-e8fd-459a-81a9-67f5d42d3960 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:56 localhost systemd[1]: tmp-crun.CaNxu8.mount: Deactivated successfully. Oct 5 06:08:56 localhost podman[336543]: 2025-10-05 10:08:56.655362506 +0000 UTC m=+0.061206383 container kill b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:08:56 localhost dnsmasq[336484]: exiting on receipt of SIGTERM Oct 5 06:08:56 localhost systemd[1]: libpod-b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18.scope: Deactivated successfully. 
Oct 5 06:08:56 localhost podman[336557]: 2025-10-05 10:08:56.732053896 +0000 UTC m=+0.064003750 container died b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:08:56 localhost systemd[1]: tmp-crun.co5w5B.mount: Deactivated successfully. Oct 5 06:08:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:08:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e177 do_prune osdmap full prune enabled Oct 5 06:08:56 localhost podman[336557]: 2025-10-05 10:08:56.769826169 +0000 UTC m=+0.101775993 container cleanup b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 06:08:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e178 e178: 6 total, 6 up, 6 in Oct 5 06:08:56 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e178: 6 total, 6 up, 6 in Oct 5 06:08:56 localhost systemd[1]: 
libpod-conmon-b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18.scope: Deactivated successfully. Oct 5 06:08:56 localhost podman[336559]: 2025-10-05 10:08:56.858899341 +0000 UTC m=+0.181556846 container remove b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:57 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/923623792' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:57 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/923623792' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:57 localhost systemd[1]: var-lib-containers-storage-overlay-a549f1f84a5474b5325301fb3b7ca142c9734559b52d1d228cddf67bc29d5f46-merged.mount: Deactivated successfully. Oct 5 06:08:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5afb4af80d5276241dcb4e50b0a91060c80febb783ba1ac83e10b46267ddb18-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:08:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:08:57 localhost podman[336587]: 2025-10-05 10:08:57.348479126 +0000 UTC m=+0.081523470 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:08:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:08:57 localhost podman[336587]: 2025-10-05 10:08:57.368784031 +0000 UTC m=+0.101828345 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid) Oct 5 06:08:57 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:08:57 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:57.411 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f0:65 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3520bd-795e-496b-9bd8-63b98bafb741, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=405e4ec1-95c6-4d96-9868-4d6f8824ae0c) old=Port_Binding(mac=['fa:16:3e:c1:f0:65 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:57 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:57.413 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 405e4ec1-95c6-4d96-9868-4d6f8824ae0c in datapath 6c5c636c-bc8a-429a-8f10-8f4508a77c3b updated#033[00m Oct 5 06:08:57 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:57.416 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 5a74b9de-50d6-4353-8e8b-6f847fc87d19 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:08:57 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:57.416 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:57 localhost ovn_metadata_agent[163429]: 2025-10-05 10:08:57.417 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc9ad89-1c24-4516-9854-a6b7eba052af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:57 localhost systemd[1]: tmp-crun.VCadQk.mount: Deactivated successfully. 
Oct 5 06:08:57 localhost podman[336606]: 2025-10-05 10:08:57.471061127 +0000 UTC m=+0.095732702 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:08:57 localhost podman[336606]: 2025-10-05 10:08:57.482643608 +0000 UTC m=+0.107315163 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 06:08:57 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:08:58 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:58.260 2 INFO neutron.agent.securitygroups_rpc [None req-375d4cef-b02d-4920-a219-c43790d6eb04 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['3f24eb1d-3619-4317-aa94-a0a6422fd556', '5494f8cd-e84c-4ce4-b27e-50351805d667', 'e41dff43-d69f-4ffb-9be8-bbcee95191da']#033[00m Oct 5 06:08:58 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:58.796 2 INFO neutron.agent.securitygroups_rpc [None req-8bc387b0-c1e0-4b39-8ca0-f92bc5e66ef6 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['3f24eb1d-3619-4317-aa94-a0a6422fd556', '5494f8cd-e84c-4ce4-b27e-50351805d667']#033[00m Oct 5 06:08:59 localhost podman[336674]: Oct 5 06:08:59 localhost podman[336674]: 2025-10-05 10:08:59.049191957 +0000 UTC m=+0.093608014 container create 43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:08:59 localhost systemd[1]: Started libpod-conmon-43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b.scope. Oct 5 06:08:59 localhost systemd[1]: Started libcrun container. 
Oct 5 06:08:59 localhost podman[336674]: 2025-10-05 10:08:59.005681808 +0000 UTC m=+0.050097895 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:08:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281f438a51ed4974f5f2ace0b523a63e940721cd10002484ffad2655f360ca49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:08:59 localhost podman[336674]: 2025-10-05 10:08:59.11521669 +0000 UTC m=+0.159632737 container init 43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:08:59 localhost podman[336674]: 2025-10-05 10:08:59.126432141 +0000 UTC m=+0.170848188 container start 43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3) Oct 5 06:08:59 localhost dnsmasq[336693]: started, version 2.85 cachesize 150 Oct 5 06:08:59 localhost dnsmasq[336693]: DNS service limited to local subnets Oct 5 06:08:59 localhost dnsmasq[336693]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n 
IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:08:59 localhost dnsmasq[336693]: warning: no upstream servers configured Oct 5 06:08:59 localhost dnsmasq-dhcp[336693]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:08:59 localhost dnsmasq-dhcp[336693]: DHCP, static leases only on 10.100.0.16, lease time 1d Oct 5 06:08:59 localhost dnsmasq-dhcp[336693]: DHCP, static leases only on 10.100.0.32, lease time 1d Oct 5 06:08:59 localhost dnsmasq[336693]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/addn_hosts - 1 addresses Oct 5 06:08:59 localhost dnsmasq-dhcp[336693]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/host Oct 5 06:08:59 localhost dnsmasq-dhcp[336693]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/opts Oct 5 06:08:59 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:59.187 272040 INFO neutron.agent.dhcp.agent [None req-2bf8caa3-75be-48fe-ab6b-e2ee4680b685 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:54Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c924597e-d3db-495a-8ac1-9b4323a0115e, ip_allocation=immediate, mac_address=fa:16:3e:bb:65:9f, name=tempest-PortsTestJSON-1621227022, network_id=6c5c636c-bc8a-429a-8f10-8f4508a77c3b, port_security_enabled=True, project_id=2943591b4b454696b34524fb1ef8a7d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['3f24eb1d-3619-4317-aa94-a0a6422fd556', '5494f8cd-e84c-4ce4-b27e-50351805d667'], standard_attr_id=2605, status=DOWN, tags=[], tenant_id=2943591b4b454696b34524fb1ef8a7d5, updated_at=2025-10-05T10:08:58Z on network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b#033[00m Oct 5 
06:08:59 localhost dnsmasq-dhcp[336693]: DHCPRELEASE(tap5d3b717e-04) 10.100.0.10 fa:16:3e:bb:65:9f Oct 5 06:08:59 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:08:59.532 272040 INFO neutron.agent.dhcp.agent [None req-afc5e1d4-9c1e-4c25-9aff-fddaa4726cb2 - - - - - -] DHCP configuration for ports {'5b52ed09-1238-4fa6-9a9b-4538c79b4319', '5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48', 'c924597e-d3db-495a-8ac1-9b4323a0115e', '405e4ec1-95c6-4d96-9868-4d6f8824ae0c'} is completed#033[00m Oct 5 06:08:59 localhost neutron_sriov_agent[264984]: 2025-10-05 10:08:59.673 2 INFO neutron.agent.securitygroups_rpc [None req-9686299d-cab6-47d3-9abe-3e81fe4f3e3c f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:59 localhost dnsmasq[336693]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/addn_hosts - 1 addresses Oct 5 06:08:59 localhost dnsmasq-dhcp[336693]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/host Oct 5 06:08:59 localhost dnsmasq-dhcp[336693]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/opts Oct 5 06:08:59 localhost podman[336710]: 2025-10-05 10:08:59.796042779 +0000 UTC m=+0.059494329 container kill 43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:09:00 localhost systemd[1]: tmp-crun.xJa7ps.mount: Deactivated successfully. 
Oct 5 06:09:00 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:00.069 272040 INFO neutron.agent.dhcp.agent [None req-980533fb-5be8-4a8b-b8c7-2bcb3a034827 - - - - - -] DHCP configuration for ports {'c924597e-d3db-495a-8ac1-9b4323a0115e'} is completed#033[00m Oct 5 06:09:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:00 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2437027055' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:00 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2437027055' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:00 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:00.217 2 INFO neutron.agent.securitygroups_rpc [None req-bc48c6e7-0c03-4ca1-8c93-290080c26aa7 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:00 localhost dnsmasq[336693]: exiting on receipt of SIGTERM Oct 5 06:09:00 localhost podman[336748]: 2025-10-05 10:09:00.3033794 +0000 UTC m=+0.063889426 container kill 43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251001, io.buildah.version=1.41.3) Oct 5 06:09:00 localhost systemd[1]: libpod-43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b.scope: Deactivated successfully. Oct 5 06:09:00 localhost podman[336762]: 2025-10-05 10:09:00.37637544 +0000 UTC m=+0.060466305 container died 43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:09:00 localhost nova_compute[297021]: 2025-10-05 10:09:00.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:00 localhost podman[336762]: 2025-10-05 10:09:00.459765019 +0000 UTC m=+0.143855854 container cleanup 43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3) Oct 5 06:09:00 localhost systemd[1]: libpod-conmon-43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b.scope: Deactivated successfully. 
Oct 5 06:09:00 localhost podman[336764]: 2025-10-05 10:09:00.501110949 +0000 UTC m=+0.175324738 container remove 43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:09:00 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:00.847 2 INFO neutron.agent.securitygroups_rpc [None req-bfed7ab9-dd4e-4a63-bc1b-c6d9d4378152 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:09:01 localhost systemd[1]: var-lib-containers-storage-overlay-281f438a51ed4974f5f2ace0b523a63e940721cd10002484ffad2655f360ca49-merged.mount: Deactivated successfully. Oct 5 06:09:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43af80d938b1b3b4ee375807f35ac5cfe2ade081b1c33240e8dc90f8dd2b427b-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:09:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:01.338 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 5a74b9de-50d6-4353-8e8b-6f847fc87d19 with type ""#033[00m Oct 5 06:09:01 localhost ovn_controller[157794]: 2025-10-05T10:09:01Z|00368|binding|INFO|Removing iface tap5d3b717e-04 ovn-installed in OVS Oct 5 06:09:01 localhost ovn_controller[157794]: 2025-10-05T10:09:01Z|00369|binding|INFO|Removing lport 5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48 ovn-installed in OVS Oct 5 06:09:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:01.340 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28 10.100.0.35/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3520bd-795e-496b-9bd8-63b98bafb741, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48) old= matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:09:01 localhost nova_compute[297021]: 2025-10-05 10:09:01.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:01.343 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48 in datapath 6c5c636c-bc8a-429a-8f10-8f4508a77c3b unbound from our chassis#033[00m Oct 5 06:09:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:01.345 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:09:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:01.348 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[f002b530-e29f-4600-a324-ba8e186f6312]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:09:01 localhost nova_compute[297021]: 2025-10-05 10:09:01.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:01 localhost podman[336844]: Oct 5 06:09:01 localhost podman[336844]: 2025-10-05 10:09:01.613202037 +0000 UTC m=+0.080543003 container create 582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:09:01 localhost systemd[1]: Started libpod-conmon-582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5.scope. Oct 5 06:09:01 localhost podman[336844]: 2025-10-05 10:09:01.574142808 +0000 UTC m=+0.041483804 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:09:01 localhost systemd[1]: tmp-crun.APAI91.mount: Deactivated successfully. Oct 5 06:09:01 localhost systemd[1]: Started libcrun container. Oct 5 06:09:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540fd340b1d371b350f333c4ee47d4c652bb90eb65c5670918b4d373f899a83f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:09:01 localhost podman[336844]: 2025-10-05 10:09:01.719427919 +0000 UTC m=+0.186768905 container init 582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:09:01 localhost podman[336844]: 2025-10-05 10:09:01.730365872 +0000 UTC m=+0.197706868 container start 582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:09:01 localhost dnsmasq[336862]: started, version 2.85 cachesize 150 Oct 5 06:09:01 localhost dnsmasq[336862]: DNS service limited to local subnets Oct 5 06:09:01 localhost dnsmasq[336862]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:09:01 localhost dnsmasq[336862]: warning: no upstream servers configured Oct 5 06:09:01 localhost dnsmasq-dhcp[336862]: DHCP, static leases only on 10.100.0.16, lease time 1d Oct 5 06:09:01 localhost dnsmasq[336862]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/addn_hosts - 0 addresses Oct 5 06:09:01 localhost dnsmasq-dhcp[336862]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/host Oct 5 06:09:01 localhost dnsmasq-dhcp[336862]: read /var/lib/neutron/dhcp/6c5c636c-bc8a-429a-8f10-8f4508a77c3b/opts Oct 5 06:09:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e178 do_prune osdmap full prune enabled Oct 5 06:09:01 localhost ovn_controller[157794]: 2025-10-05T10:09:01Z|00370|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:09:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e179 e179: 6 total, 6 up, 6 in Oct 5 06:09:01 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e179: 6 total, 6 up, 6 in Oct 5 06:09:01 localhost nova_compute[297021]: 2025-10-05 10:09:01.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:01 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:01.905 272040 INFO neutron.agent.dhcp.agent [None 
req-aa388eed-f5f3-4bd9-8da9-cac9a1a89197 - - - - - -] DHCP configuration for ports {'5b52ed09-1238-4fa6-9a9b-4538c79b4319', '5d3b717e-04fc-41b6-8eaa-a2f5dfefaa48', '405e4ec1-95c6-4d96-9868-4d6f8824ae0c'} is completed#033[00m Oct 5 06:09:02 localhost dnsmasq[336862]: exiting on receipt of SIGTERM Oct 5 06:09:02 localhost podman[336880]: 2025-10-05 10:09:02.027576402 +0000 UTC m=+0.076654039 container kill 582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:09:02 localhost systemd[1]: libpod-582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5.scope: Deactivated successfully. Oct 5 06:09:02 localhost podman[336896]: 2025-10-05 10:09:02.11016528 +0000 UTC m=+0.058154253 container died 582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:09:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:09:02 localhost systemd[1]: var-lib-containers-storage-overlay-540fd340b1d371b350f333c4ee47d4c652bb90eb65c5670918b4d373f899a83f-merged.mount: Deactivated successfully. Oct 5 06:09:02 localhost podman[336896]: 2025-10-05 10:09:02.214598964 +0000 UTC m=+0.162587897 container remove 582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6c5c636c-bc8a-429a-8f10-8f4508a77c3b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:09:02 localhost systemd[1]: libpod-conmon-582ef4daaa576aab3535e4eb052d9d62397ae00efd21c28a09377b1d23eddda5.scope: Deactivated successfully. Oct 5 06:09:02 localhost nova_compute[297021]: 2025-10-05 10:09:02.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:02 localhost kernel: device tap5d3b717e-04 left promiscuous mode Oct 5 06:09:02 localhost nova_compute[297021]: 2025-10-05 10:09:02.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:02 localhost systemd[1]: run-netns-qdhcp\x2d6c5c636c\x2dbc8a\x2d429a\x2d8f10\x2d8f4508a77c3b.mount: Deactivated successfully. 
Oct 5 06:09:02 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:02.277 272040 INFO neutron.agent.dhcp.agent [None req-5a683cbd-17c1-46bc-a1e2-e1c4aa0690e2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:02 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:02.278 272040 INFO neutron.agent.dhcp.agent [None req-5a683cbd-17c1-46bc-a1e2-e1c4aa0690e2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:02 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:02.279 272040 INFO neutron.agent.dhcp.agent [None req-5a683cbd-17c1-46bc-a1e2-e1c4aa0690e2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:02 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:02.279 272040 INFO neutron.agent.dhcp.agent [None req-5a683cbd-17c1-46bc-a1e2-e1c4aa0690e2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:09:03 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:03.653 2 INFO neutron.agent.securitygroups_rpc [None req-3960a7db-6f98-4f76-bedd-171182bec9e5 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:03 localhost podman[336921]: 2025-10-05 10:09:03.681662532 +0000 UTC m=+0.089988507 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 06:09:03 localhost podman[336921]: 2025-10-05 10:09:03.716896728 +0000 UTC m=+0.125222603 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:09:03 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:09:04 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:04.184 2 INFO neutron.agent.securitygroups_rpc [None req-12a6e729-e23a-4a91-b1d1-b640ebdb282f f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:05 localhost nova_compute[297021]: 2025-10-05 10:09:05.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:05 localhost nova_compute[297021]: 2025-10-05 10:09:05.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e179 do_prune osdmap full prune enabled Oct 5 06:09:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e180 e180: 6 total, 6 up, 6 in Oct 5 06:09:06 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e180: 6 total, 6 up, 6 in Oct 5 06:09:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:06.812 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8:0:1:f816:3eff:fe6e:70bd'], port_security=[], 
type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:09:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:06.814 163434 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:09:06 localhost 
ovn_metadata_agent[163429]: 2025-10-05 10:09:06.817 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:09:06 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:06.818 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[d9789a69-4e6a-4e5c-a22f-2a533cc4441c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:09:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e180 do_prune osdmap full prune enabled Oct 5 06:09:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e181 e181: 6 total, 6 up, 6 in Oct 5 06:09:07 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e181: 6 total, 6 up, 6 in Oct 5 06:09:07 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:07.213 2 INFO neutron.agent.securitygroups_rpc [None req-cba438c9-4daa-407e-a6d9-a2fa2c21cb42 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:09:07 localhost podman[336940]: 2025-10-05 10:09:07.683290269 +0000 UTC m=+0.087806708 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3) Oct 5 06:09:07 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:07.726 2 INFO neutron.agent.securitygroups_rpc [None req-db818d45-4c35-4d98-b7ca-8b1be2e037ff f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:07 localhost podman[336940]: 2025-10-05 10:09:07.752911569 +0000 UTC m=+0.157428028 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:09:07 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:09:08 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e181 do_prune osdmap full prune enabled Oct 5 06:09:08 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e182 e182: 6 total, 6 up, 6 in Oct 5 06:09:08 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e182: 6 total, 6 up, 6 in Oct 5 06:09:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e182 do_prune osdmap full prune enabled Oct 5 06:09:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e183 e183: 6 total, 6 up, 6 in Oct 5 06:09:09 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e183: 6 total, 6 up, 6 in Oct 5 06:09:10 localhost nova_compute[297021]: 2025-10-05 10:09:10.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:09:10 localhost nova_compute[297021]: 2025-10-05 10:09:10.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:09:10 localhost nova_compute[297021]: 2025-10-05 10:09:10.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:09:10 localhost nova_compute[297021]: 2025-10-05 10:09:10.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:10 localhost nova_compute[297021]: 2025-10-05 10:09:10.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:10 localhost nova_compute[297021]: 2025-10-05 10:09:10.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:10 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:09:10 localhost systemd[1]: tmp-crun.6cz8Jj.mount: Deactivated successfully. Oct 5 06:09:10 localhost podman[336965]: 2025-10-05 10:09:10.69204599 +0000 UTC m=+0.099950085 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Oct 5 06:09:10 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:10.695 2 INFO neutron.agent.securitygroups_rpc [None req-bd6169c8-5f25-4b33-a37a-6328fb4320f6 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:10 localhost podman[336965]: 2025-10-05 10:09:10.702701106 +0000 UTC m=+0.110605191 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001) Oct 5 06:09:10 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:09:11 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:11.110 2 INFO neutron.agent.securitygroups_rpc [None req-19f2ba91-a8f2-4e9b-bcd7-482a8c4e1e9f f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:11 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e44: np0005471152.kbhlus(active, since 9m), standbys: np0005471150.zwqxye, np0005471151.jecxod Oct 5 06:09:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e183 do_prune osdmap full prune enabled Oct 5 06:09:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e184 e184: 6 total, 6 up, 6 in Oct 5 06:09:11 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e184: 6 total, 6 up, 6 in Oct 5 06:09:12 localhost nova_compute[297021]: 2025-10-05 10:09:12.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e184 do_prune osdmap full prune enabled Oct 5 06:09:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e185 e185: 6 total, 6 up, 6 in Oct 5 06:09:13 localhost ceph-mon[308154]: log_channel(cluster) 
log [DBG] : osdmap e185: 6 total, 6 up, 6 in Oct 5 06:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:09:13 localhost podman[336984]: 2025-10-05 10:09:13.676124547 +0000 UTC m=+0.082901576 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 06:09:13 localhost podman[336984]: 2025-10-05 10:09:13.695993821 +0000 UTC m=+0.102770850 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal 
Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 06:09:13 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:09:14 localhost nova_compute[297021]: 2025-10-05 10:09:14.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:14 localhost nova_compute[297021]: 2025-10-05 10:09:14.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:14 localhost nova_compute[297021]: 2025-10-05 10:09:14.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:09:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e185 do_prune osdmap full prune enabled Oct 5 06:09:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e186 e186: 6 total, 6 up, 6 in Oct 5 06:09:15 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e186: 6 total, 6 up, 6 in Oct 5 06:09:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:15 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/151239763' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:15 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/151239763' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:15 localhost nova_compute[297021]: 2025-10-05 10:09:15.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:09:15 localhost nova_compute[297021]: 2025-10-05 10:09:15.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:15 localhost nova_compute[297021]: 2025-10-05 10:09:15.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:09:15 localhost nova_compute[297021]: 2025-10-05 10:09:15.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:15 localhost nova_compute[297021]: 2025-10-05 10:09:15.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:15 localhost nova_compute[297021]: 2025-10-05 10:09:15.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:16 localhost ceph-mon[308154]: log_channel(audit) 
log [DBG] : from='client.? 172.18.0.32:0/2838741561' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:16 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2838741561' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e186 do_prune osdmap full prune enabled Oct 5 06:09:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e187 e187: 6 total, 6 up, 6 in Oct 5 06:09:16 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e187: 6 total, 6 up, 6 in Oct 5 06:09:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:09:17 localhost podman[337004]: 2025-10-05 10:09:17.006656858 +0000 UTC m=+0.091154258 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:09:17 localhost podman[337004]: 2025-10-05 10:09:17.021858196 +0000 UTC m=+0.106355636 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:09:17 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:09:17 localhost nova_compute[297021]: 2025-10-05 10:09:17.418 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:18 localhost nova_compute[297021]: 2025-10-05 10:09:18.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e187 do_prune osdmap full prune enabled Oct 5 06:09:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e188 e188: 6 total, 6 up, 6 in Oct 5 06:09:18 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e188: 6 total, 6 up, 6 in Oct 5 06:09:19 localhost nova_compute[297021]: 2025-10-05 10:09:19.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e188 do_prune osdmap full prune enabled Oct 5 06:09:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e189 e189: 6 total, 6 up, 6 in Oct 5 06:09:19 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e189: 6 total, 6 up, 6 in Oct 5 06:09:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 
handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3029248791' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3029248791' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:20 localhost nova_compute[297021]: 2025-10-05 10:09:20.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:20.470 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:09:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:20.471 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:09:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:20.472 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:09:20 localhost 
nova_compute[297021]: 2025-10-05 10:09:20.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:09:20 localhost nova_compute[297021]: 2025-10-05 10:09:20.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:09:20 localhost nova_compute[297021]: 2025-10-05 10:09:20.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:09:20 localhost nova_compute[297021]: 2025-10-05 10:09:20.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:20 localhost nova_compute[297021]: 2025-10-05 10:09:20.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:20 localhost nova_compute[297021]: 2025-10-05 10:09:20.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:21 localhost podman[248506]: time="2025-10-05T10:09:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:09:21 localhost podman[248506]: @ - - [05/Oct/2025:10:09:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147502 "" "Go-http-client/1.1" Oct 5 06:09:21 localhost podman[248506]: @ - - [05/Oct/2025:10:09:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19853 "" "Go-http-client/1.1" Oct 5 06:09:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:09:21 localhost podman[337028]: 2025-10-05 10:09:21.68035505 +0000 UTC m=+0.090420900 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:09:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:21.704 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 
'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:09:21 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:21.705 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:09:21 localhost podman[337028]: 2025-10-05 10:09:21.716931692 +0000 UTC m=+0.126997522 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) 
Oct 5 06:09:21 localhost nova_compute[297021]: 2025-10-05 10:09:21.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:21 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:09:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e189 do_prune osdmap full prune enabled Oct 5 06:09:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e190 e190: 6 total, 6 up, 6 in Oct 5 06:09:21 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e190: 6 total, 6 up, 6 in Oct 5 06:09:22 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:22.016 2 INFO neutron.agent.securitygroups_rpc [None req-fad1303f-10e1-4f92-96c1-525ebd79c22d c9709adfed054f448254a4bcf5f9f2b1 b103796d13b94d8190276faed33a3c03 - - default default] Security group member updated ['f4b0fb50-401c-4073-88d7-f445d90ddf1f']#033[00m Oct 5 06:09:22 localhost openstack_network_exporter[250601]: ERROR 10:09:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:09:22 localhost openstack_network_exporter[250601]: ERROR 10:09:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:09:22 localhost openstack_network_exporter[250601]: ERROR 10:09:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:09:22 localhost openstack_network_exporter[250601]: ERROR 10:09:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:09:22 localhost openstack_network_exporter[250601]: Oct 5 06:09:22 localhost openstack_network_exporter[250601]: ERROR 
10:09:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:09:22 localhost openstack_network_exporter[250601]: Oct 5 06:09:22 localhost nova_compute[297021]: 2025-10-05 10:09:22.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:22 localhost nova_compute[297021]: 2025-10-05 10:09:22.443 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:09:22 localhost nova_compute[297021]: 2025-10-05 10:09:22.443 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:09:22 localhost nova_compute[297021]: 2025-10-05 10:09:22.444 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:09:22 localhost nova_compute[297021]: 2025-10-05 10:09:22.445 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:09:22 localhost 
nova_compute[297021]: 2025-10-05 10:09:22.445 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:09:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:09:22 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1904682707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:09:22 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:22.926 2 INFO neutron.agent.securitygroups_rpc [None req-f5cc318a-721d-403f-b0ec-8fc7507ec8fd c9709adfed054f448254a4bcf5f9f2b1 b103796d13b94d8190276faed33a3c03 - - default default] Security group member updated ['f4b0fb50-401c-4073-88d7-f445d90ddf1f']#033[00m Oct 5 06:09:22 localhost nova_compute[297021]: 2025-10-05 10:09:22.928 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.000 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.001 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:09:23 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:23.078 2 INFO neutron.agent.securitygroups_rpc [None req-c278cc7e-1b90-41ba-a679-990bec890d12 f780144ddebc407da5a029259c3265a6 1c8daf35e79847329bde1c6cf0340477 - - default default] Security group rule updated ['d9126934-1777-40de-b348-3975c8158884']#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.261 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.262 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11140MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.262 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.263 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.381 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this 
compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.382 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.382 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:09:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:23 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/468968078' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:23 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/468968078' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.431 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:09:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:09:23 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2784972171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.870 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:09:23 localhost nova_compute[297021]: 2025-10-05 10:09:23.878 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:09:24 localhost nova_compute[297021]: 2025-10-05 10:09:24.019 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 
'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:09:24 localhost nova_compute[297021]: 2025-10-05 10:09:24.022 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:09:24 localhost nova_compute[297021]: 2025-10-05 10:09:24.023 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:09:24 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:24.183 2 INFO neutron.agent.securitygroups_rpc [None req-ed7161ea-821e-4844-93bf-e3373bfef5f6 f780144ddebc407da5a029259c3265a6 1c8daf35e79847329bde1c6cf0340477 - - default default] Security group rule updated ['d9126934-1777-40de-b348-3975c8158884']#033[00m Oct 5 06:09:24 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:24.326 2 INFO neutron.agent.securitygroups_rpc [None req-0d172463-a426-431e-b15d-cf3f700edad7 c9709adfed054f448254a4bcf5f9f2b1 b103796d13b94d8190276faed33a3c03 - - default default] Security group member updated ['f4b0fb50-401c-4073-88d7-f445d90ddf1f']#033[00m Oct 5 06:09:25 localhost nova_compute[297021]: 2025-10-05 10:09:25.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. 
Immutable memtables: 0. Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.634787) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58 Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965634836, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2474, "num_deletes": 262, "total_data_size": 3092157, "memory_usage": 3178168, "flush_reason": "Manual Compaction"} Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965655759, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3027410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30231, "largest_seqno": 32704, "table_properties": {"data_size": 3016907, "index_size": 6624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 24304, "raw_average_key_size": 21, "raw_value_size": 2995051, "raw_average_value_size": 2710, "num_data_blocks": 284, "num_entries": 1105, "num_filter_entries": 1105, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; 
level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658819, "oldest_key_time": 1759658819, "file_creation_time": 1759658965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}} Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 21034 microseconds, and 7684 cpu microseconds. Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.655813) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3027410 bytes OK Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.655848) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.658870) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.658893) EVENT_LOG_v1 {"time_micros": 1759658965658886, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.658914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3081465, prev 
total WAL file size 3081465, number of live WAL files 2. Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.659969) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end) Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(2956KB)], [57(13MB)] Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965660018, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 17528471, "oldest_snapshot_seqno": -1} Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 13101 keys, 16448813 bytes, temperature: kUnknown Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965755536, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 16448813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16374511, "index_size": 40473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32773, "raw_key_size": 352535, "raw_average_key_size": 26, 
"raw_value_size": 16151878, "raw_average_value_size": 1232, "num_data_blocks": 1513, "num_entries": 13101, "num_filter_entries": 13101, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759658965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}} Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.755866) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 16448813 bytes Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.759362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.3 rd, 172.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 13.8 +0.0 blob) out(15.7 +0.0 blob), read-write-amplify(11.2) write-amplify(5.4) OK, records in: 13639, records dropped: 538 output_compression: NoCompression Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.759436) EVENT_LOG_v1 {"time_micros": 1759658965759385, "job": 34, "event": "compaction_finished", "compaction_time_micros": 95610, "compaction_time_cpu_micros": 49205, "output_level": 6, "num_output_files": 1, "total_output_size": 16448813, "num_input_records": 13639, "num_output_records": 13101, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965760133, "job": 34, "event": "table_file_deletion", "file_number": 59} Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965762604, "job": 34, 
"event": "table_file_deletion", "file_number": 57} Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.659892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.762678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.762684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.762687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.762690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:09:25.762693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1261466186' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1261466186' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:26 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:26.223 2 INFO neutron.agent.securitygroups_rpc [None req-6e3cb276-bf4f-4dda-9d60-a803c8ae9afd c9709adfed054f448254a4bcf5f9f2b1 b103796d13b94d8190276faed33a3c03 - - default default] Security group member updated ['f4b0fb50-401c-4073-88d7-f445d90ddf1f']#033[00m Oct 5 06:09:26 localhost nova_compute[297021]: 2025-10-05 10:09:26.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e190 do_prune osdmap full prune enabled Oct 5 06:09:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e191 e191: 6 total, 6 up, 6 in Oct 5 06:09:26 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e191: 6 total, 6 up, 6 in Oct 5 06:09:27 localhost nova_compute[297021]: 2025-10-05 10:09:27.019 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:27 localhost nova_compute[297021]: 2025-10-05 10:09:27.043 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:27 localhost nova_compute[297021]: 2025-10-05 10:09:27.043 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] 
Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:09:27 localhost nova_compute[297021]: 2025-10-05 10:09:27.043 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:09:27 localhost nova_compute[297021]: 2025-10-05 10:09:27.129 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:09:27 localhost nova_compute[297021]: 2025-10-05 10:09:27.130 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:09:27 localhost nova_compute[297021]: 2025-10-05 10:09:27.130 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:09:27 localhost nova_compute[297021]: 2025-10-05 10:09:27.130 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:09:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. 
Oct 5 06:09:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:09:27 localhost podman[337096]: 2025-10-05 10:09:27.684364328 +0000 UTC m=+0.086496723 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:09:27 localhost podman[337097]: 2025-10-05 10:09:27.733071695 +0000 UTC m=+0.135568381 container 
health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Oct 5 06:09:27 localhost podman[337096]: 2025-10-05 10:09:27.750220166 +0000 UTC m=+0.152352581 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, 
name=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:09:27 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:09:27 localhost podman[337097]: 2025-10-05 10:09:27.769542095 +0000 UTC m=+0.172038800 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:09:27 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:09:28 localhost nova_compute[297021]: 2025-10-05 10:09:28.337 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:09:28 localhost nova_compute[297021]: 2025-10-05 10:09:28.357 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:09:28 localhost nova_compute[297021]: 2025-10-05 10:09:28.357 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network 
info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:09:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:29.707 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:09:30 localhost nova_compute[297021]: 2025-10-05 10:09:30.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:30 localhost nova_compute[297021]: 2025-10-05 10:09:30.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:31 localhost systemd[1]: tmp-crun.u9zdE1.mount: Deactivated successfully. 
Oct 5 06:09:31 localhost podman[337239]: 2025-10-05 10:09:31.380994817 +0000 UTC m=+0.101896877 container exec 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, vcs-type=git, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 5 06:09:31 localhost podman[337239]: 2025-10-05 10:09:31.515697923 +0000 UTC m=+0.236600023 container exec_died 83cdbe412fcc5e2e6f269a36f3233c5f4cafa3d10d63aa17fea3a840aa9f6df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471150, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red 
Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 06:09:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 06:09:32 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 06:09:32 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 06:09:32 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 06:09:32 localhost ceph-mon[308154]: log_channel(audit) log 
[INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 06:09:32 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 06:09:32 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:32 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:32.440 2 INFO neutron.agent.securitygroups_rpc [req-6353351a-5432-4eff-bd51-af198ef2c8ab req-2f538ea6-c0e6-44bc-8666-e3e31a7af142 f780144ddebc407da5a029259c3265a6 1c8daf35e79847329bde1c6cf0340477 - - default default] Security group member updated ['d9126934-1777-40de-b348-3975c8158884']#033[00m Oct 5 06:09:32 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:32.474 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:09:32Z, description=, device_id=6262b619-f19e-41c4-bcf9-6d3c5c2db67b, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3d0766ac-65a0-44b0-bdd0-11cfdf4daa9b, ip_allocation=immediate, mac_address=fa:16:3e:07:30:03, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:08:46Z, description=, dns_domain=, id=041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-2145772512-network, port_security_enabled=True, 
project_id=1c8daf35e79847329bde1c6cf0340477, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44451, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2570, status=ACTIVE, subnets=['c1832f08-c926-4144-8fd6-8310e551b356'], tags=[], tenant_id=1c8daf35e79847329bde1c6cf0340477, updated_at=2025-10-05T10:08:47Z, vlan_transparent=None, network_id=041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, port_security_enabled=True, project_id=1c8daf35e79847329bde1c6cf0340477, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d9126934-1777-40de-b348-3975c8158884'], standard_attr_id=2754, status=DOWN, tags=[], tenant_id=1c8daf35e79847329bde1c6cf0340477, updated_at=2025-10-05T10:09:32Z on network 041cfe8f-6406-4960-bbb2-faeb2bcfb0e5#033[00m Oct 5 06:09:32 localhost podman[337412]: 2025-10-05 10:09:32.721745244 +0000 UTC m=+0.069489167 container kill 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:09:32 localhost dnsmasq[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/addn_hosts - 2 addresses Oct 5 06:09:32 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/host Oct 5 06:09:32 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/opts Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": 
"config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Oct 5 06:09:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Oct 5 06:09:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Oct 5 06:09:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Oct 5 06:09:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 5 06:09:33 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:33.026 272040 INFO neutron.agent.dhcp.agent [None req-41f97e2c-f0fe-4e86-b9fe-13ab9bfd19ba - - - - - -] DHCP configuration for ports {'3d0766ac-65a0-44b0-bdd0-11cfdf4daa9b'} is completed#033[00m Oct 5 
06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' 
cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Oct 5 06:09:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Oct 5 06:09:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Oct 5 06:09:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:09:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:33 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:33.389 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005471151.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:09:32Z, description=, device_id=6262b619-f19e-41c4-bcf9-6d3c5c2db67b, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-volumesbackupstest-instance-2072821730, extra_dhcp_opts=[], fixed_ips=[], id=3d0766ac-65a0-44b0-bdd0-11cfdf4daa9b, ip_allocation=immediate, mac_address=fa:16:3e:07:30:03, name=, 
network_id=041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, port_security_enabled=True, project_id=1c8daf35e79847329bde1c6cf0340477, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['d9126934-1777-40de-b348-3975c8158884'], standard_attr_id=2754, status=DOWN, tags=[], tenant_id=1c8daf35e79847329bde1c6cf0340477, updated_at=2025-10-05T10:09:33Z on network 041cfe8f-6406-4960-bbb2-faeb2bcfb0e5#033[00m Oct 5 06:09:33 localhost dnsmasq[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/addn_hosts - 2 addresses Oct 5 06:09:33 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/host Oct 5 06:09:33 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/opts Oct 5 06:09:33 localhost podman[337501]: 2025-10-05 10:09:33.593496409 +0000 UTC m=+0.060486954 container kill 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:09:33 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:33.754 272040 INFO neutron.agent.dhcp.agent [None req-5c373be0-3c46-49b6-bf6f-ae08da512f25 - - - - - -] DHCP configuration for ports {'3d0766ac-65a0-44b0-bdd0-11cfdf4daa9b'} is completed#033[00m Oct 5 06:09:34 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 06:09:34 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 06:09:34 localhost ceph-mon[308154]: 
Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 06:09:34 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 06:09:34 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 06:09:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 06:09:34 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 06:09:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 06:09:34 localhost ceph-mon[308154]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 06:09:34 localhost ceph-mon[308154]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 06:09:34 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:09:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:09:34 localhost podman[337521]: 2025-10-05 10:09:34.675305264 +0000 UTC m=+0.081104969 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:09:34 localhost podman[337521]: 2025-10-05 10:09:34.709966925 +0000 UTC 
m=+0.115766670 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Oct 5 06:09:34 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:09:34 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:34.905 2 INFO neutron.agent.securitygroups_rpc [None req-8e52407e-22e3-4460-b563-f3ed98e26817 c55b4469474b45aa8c7e62a1c67e220f 093f417e0eff4abba8c994a4ac741c61 - - default default] Security group member updated ['d6e72ece-f511-4085-9183-e8d7395d0930']#033[00m Oct 5 06:09:35 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:35.345 2 INFO neutron.agent.securitygroups_rpc [None req-5a937b19-1c21-40ca-9dd1-c32713d70666 c55b4469474b45aa8c7e62a1c67e220f 093f417e0eff4abba8c994a4ac741c61 - - default default] Security group member updated ['d6e72ece-f511-4085-9183-e8d7395d0930']#033[00m Oct 5 06:09:35 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:35.565 272040 INFO neutron.agent.linux.ip_lib [None req-b8cfc63b-f84c-4bb2-9546-b6888d0c5469 - - - - - -] Device tap1977f8ed-79 cannot be used as it has no MAC address#033[00m Oct 5 06:09:35 localhost nova_compute[297021]: 2025-10-05 10:09:35.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:35 localhost kernel: device tap1977f8ed-79 entered promiscuous mode Oct 5 06:09:35 localhost ovn_controller[157794]: 2025-10-05T10:09:35Z|00371|binding|INFO|Claiming lport 1977f8ed-79cf-49cc-b077-5e1d919e8aee for this chassis. Oct 5 06:09:35 localhost ovn_controller[157794]: 2025-10-05T10:09:35Z|00372|binding|INFO|1977f8ed-79cf-49cc-b077-5e1d919e8aee: Claiming unknown Oct 5 06:09:35 localhost nova_compute[297021]: 2025-10-05 10:09:35.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:35 localhost NetworkManager[5981]: [1759658975.6304] manager: (tap1977f8ed-79): new Generic device (/org/freedesktop/NetworkManager/Devices/59) Oct 5 06:09:35 localhost systemd-udevd[337549]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:09:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:35.640 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-1a43b70a-ee07-46b4-9b5c-ce99a1259414', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a43b70a-ee07-46b4-9b5c-ce99a1259414', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '093f417e0eff4abba8c994a4ac741c61', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01d979b2-0bd1-4cbe-b7d3-8bb242a29e0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=1977f8ed-79cf-49cc-b077-5e1d919e8aee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:09:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:35.642 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 1977f8ed-79cf-49cc-b077-5e1d919e8aee in datapath 1a43b70a-ee07-46b4-9b5c-ce99a1259414 bound to our chassis#033[00m Oct 5 06:09:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:35.644 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port a3b762cd-e6da-4ef9-a208-f09c7cebe095 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:09:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:35.644 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1a43b70a-ee07-46b4-9b5c-ce99a1259414, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:09:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:35.645 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[8f28a19b-2e3a-490b-8600-cc7f3eb57a28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:09:35 localhost journal[237931]: ethtool ioctl error on tap1977f8ed-79: No such device Oct 5 06:09:35 localhost journal[237931]: ethtool ioctl error on tap1977f8ed-79: No such device Oct 5 06:09:35 localhost nova_compute[297021]: 2025-10-05 10:09:35.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:35 localhost journal[237931]: ethtool ioctl error on tap1977f8ed-79: No such device Oct 5 06:09:35 localhost ovn_controller[157794]: 2025-10-05T10:09:35Z|00373|binding|INFO|Setting lport 1977f8ed-79cf-49cc-b077-5e1d919e8aee ovn-installed in OVS Oct 5 06:09:35 localhost ovn_controller[157794]: 2025-10-05T10:09:35Z|00374|binding|INFO|Setting lport 1977f8ed-79cf-49cc-b077-5e1d919e8aee up in Southbound Oct 5 06:09:35 localhost nova_compute[297021]: 2025-10-05 10:09:35.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:35 localhost journal[237931]: ethtool ioctl error on tap1977f8ed-79: No such device Oct 5 06:09:35 localhost journal[237931]: ethtool ioctl error on tap1977f8ed-79: No such device Oct 5 06:09:35 localhost journal[237931]: ethtool ioctl error on tap1977f8ed-79: No such device Oct 5 
06:09:35 localhost journal[237931]: ethtool ioctl error on tap1977f8ed-79: No such device Oct 5 06:09:35 localhost journal[237931]: ethtool ioctl error on tap1977f8ed-79: No such device Oct 5 06:09:35 localhost nova_compute[297021]: 2025-10-05 10:09:35.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:35 localhost nova_compute[297021]: 2025-10-05 10:09:35.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:35 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:35.767 2 INFO neutron.agent.securitygroups_rpc [None req-220e88cd-40cc-4db3-90a0-0428b7fdb7ba c55b4469474b45aa8c7e62a1c67e220f 093f417e0eff4abba8c994a4ac741c61 - - default default] Security group member updated ['d6e72ece-f511-4085-9183-e8d7395d0930']#033[00m Oct 5 06:09:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e191 do_prune osdmap full prune enabled Oct 5 06:09:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e192 e192: 6 total, 6 up, 6 in Oct 5 06:09:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e192: 6 total, 6 up, 6 in Oct 5 06:09:36 localhost podman[337620]: Oct 5 06:09:36 localhost podman[337620]: 2025-10-05 10:09:36.683981184 +0000 UTC m=+0.132877128 container create a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a43b70a-ee07-46b4-9b5c-ce99a1259414, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 06:09:36 localhost podman[337620]: 
2025-10-05 10:09:36.605105927 +0000 UTC m=+0.054001911 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:09:36 localhost systemd[1]: Started libpod-conmon-a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26.scope. Oct 5 06:09:36 localhost systemd[1]: Started libcrun container. Oct 5 06:09:36 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:36.741 2 INFO neutron.agent.securitygroups_rpc [None req-0591fb42-029f-4e86-b8af-823ac9318d65 c55b4469474b45aa8c7e62a1c67e220f 093f417e0eff4abba8c994a4ac741c61 - - default default] Security group member updated ['d6e72ece-f511-4085-9183-e8d7395d0930']#033[00m Oct 5 06:09:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbcd4c3368cf33f90da89f2e592cedbcffb8f88b46b13bd1de549a0c08d749cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:09:36 localhost podman[337620]: 2025-10-05 10:09:36.756177183 +0000 UTC m=+0.205073127 container init a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a43b70a-ee07-46b4-9b5c-ce99a1259414, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3) Oct 5 06:09:36 localhost podman[337620]: 2025-10-05 10:09:36.770875497 +0000 UTC m=+0.219771441 container start a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a43b70a-ee07-46b4-9b5c-ce99a1259414, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 06:09:36 localhost dnsmasq[337638]: started, version 2.85 cachesize 150 Oct 5 06:09:36 localhost dnsmasq[337638]: DNS service limited to local subnets Oct 5 06:09:36 localhost dnsmasq[337638]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:09:36 localhost dnsmasq[337638]: warning: no upstream servers configured Oct 5 06:09:36 localhost dnsmasq-dhcp[337638]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:09:36 localhost dnsmasq[337638]: read /var/lib/neutron/dhcp/1a43b70a-ee07-46b4-9b5c-ce99a1259414/addn_hosts - 0 addresses Oct 5 06:09:36 localhost dnsmasq-dhcp[337638]: read /var/lib/neutron/dhcp/1a43b70a-ee07-46b4-9b5c-ce99a1259414/host Oct 5 06:09:36 localhost dnsmasq-dhcp[337638]: read /var/lib/neutron/dhcp/1a43b70a-ee07-46b4-9b5c-ce99a1259414/opts Oct 5 06:09:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:36.841 272040 INFO neutron.agent.dhcp.agent [None req-1288e7bb-be9c-4720-9e55-e7aea28d364d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:09:34Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b813979a-fef8-488c-a9b1-5b16b4857314, ip_allocation=immediate, mac_address=fa:16:3e:60:d5:6c, 
name=tempest-ExtraDHCPOptionsTestJSON-341005753, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:09:33Z, description=, dns_domain=, id=1a43b70a-ee07-46b4-9b5c-ce99a1259414, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-342665755, port_security_enabled=True, project_id=093f417e0eff4abba8c994a4ac741c61, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=49713, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2760, status=ACTIVE, subnets=['45256212-b02d-428b-9f5b-c5e4248b23e2'], tags=[], tenant_id=093f417e0eff4abba8c994a4ac741c61, updated_at=2025-10-05T10:09:33Z, vlan_transparent=None, network_id=1a43b70a-ee07-46b4-9b5c-ce99a1259414, port_security_enabled=True, project_id=093f417e0eff4abba8c994a4ac741c61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d6e72ece-f511-4085-9183-e8d7395d0930'], standard_attr_id=2765, status=DOWN, tags=[], tenant_id=093f417e0eff4abba8c994a4ac741c61, updated_at=2025-10-05T10:09:34Z on network 1a43b70a-ee07-46b4-9b5c-ce99a1259414#033[00m Oct 5 06:09:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:09:36 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:36.931 272040 INFO neutron.agent.dhcp.agent [None req-2f358c72-611b-4318-8791-c65182c16b01 - - - - - -] DHCP configuration for ports {'47dcf61b-164e-4f47-86ef-3f926550c25f'} is completed#033[00m Oct 5 06:09:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:37 localhost ceph-mon[308154]: 
log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3487502153' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:37 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3487502153' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:37 localhost dnsmasq[337638]: read /var/lib/neutron/dhcp/1a43b70a-ee07-46b4-9b5c-ce99a1259414/addn_hosts - 1 addresses Oct 5 06:09:37 localhost podman[337656]: 2025-10-05 10:09:37.070372978 +0000 UTC m=+0.060525025 container kill a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a43b70a-ee07-46b4-9b5c-ce99a1259414, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:09:37 localhost dnsmasq-dhcp[337638]: read /var/lib/neutron/dhcp/1a43b70a-ee07-46b4-9b5c-ce99a1259414/host Oct 5 06:09:37 localhost dnsmasq-dhcp[337638]: read /var/lib/neutron/dhcp/1a43b70a-ee07-46b4-9b5c-ce99a1259414/opts Oct 5 06:09:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:37.258 272040 INFO neutron.agent.dhcp.agent [None req-56de9a9b-6aeb-411f-a724-f4a47075922e - - - - - -] DHCP configuration for ports {'b813979a-fef8-488c-a9b1-5b16b4857314'} is completed#033[00m Oct 5 06:09:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e192 do_prune osdmap full prune enabled Oct 5 06:09:37 localhost ceph-mon[308154]: 
from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:09:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e193 e193: 6 total, 6 up, 6 in Oct 5 06:09:37 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e193: 6 total, 6 up, 6 in Oct 5 06:09:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:37.418 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a3b762cd-e6da-4ef9-a208-f09c7cebe095 with type ""#033[00m Oct 5 06:09:37 localhost ovn_controller[157794]: 2025-10-05T10:09:37Z|00375|binding|INFO|Removing iface tap1977f8ed-79 ovn-installed in OVS Oct 5 06:09:37 localhost ovn_controller[157794]: 2025-10-05T10:09:37Z|00376|binding|INFO|Removing lport 1977f8ed-79cf-49cc-b077-5e1d919e8aee ovn-installed in OVS Oct 5 06:09:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:37.420 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-1a43b70a-ee07-46b4-9b5c-ce99a1259414', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a43b70a-ee07-46b4-9b5c-ce99a1259414', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '093f417e0eff4abba8c994a4ac741c61', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=01d979b2-0bd1-4cbe-b7d3-8bb242a29e0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=1977f8ed-79cf-49cc-b077-5e1d919e8aee) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:09:37 localhost nova_compute[297021]: 2025-10-05 10:09:37.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:37.422 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 1977f8ed-79cf-49cc-b077-5e1d919e8aee in datapath 1a43b70a-ee07-46b4-9b5c-ce99a1259414 unbound from our chassis#033[00m Oct 5 06:09:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:37.425 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1a43b70a-ee07-46b4-9b5c-ce99a1259414, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:09:37 localhost ovn_metadata_agent[163429]: 2025-10-05 10:09:37.426 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f8ddba-2054-4809-8de2-6ab9d4283668]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:09:37 localhost nova_compute[297021]: 2025-10-05 10:09:37.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:37 localhost dnsmasq[337638]: exiting on receipt of SIGTERM Oct 5 06:09:37 localhost podman[337693]: 2025-10-05 10:09:37.505614435 +0000 UTC m=+0.068990684 container kill a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a43b70a-ee07-46b4-9b5c-ce99a1259414, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:09:37 localhost systemd[1]: libpod-a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26.scope: Deactivated successfully. Oct 5 06:09:37 localhost podman[337708]: 2025-10-05 10:09:37.59151293 +0000 UTC m=+0.068519150 container died a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a43b70a-ee07-46b4-9b5c-ce99a1259414, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:09:37 localhost podman[337708]: 2025-10-05 10:09:37.62388415 +0000 UTC m=+0.100890350 container cleanup a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a43b70a-ee07-46b4-9b5c-ce99a1259414, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:09:37 localhost systemd[1]: libpod-conmon-a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26.scope: Deactivated successfully. 
Oct 5 06:09:37 localhost podman[337709]: 2025-10-05 10:09:37.667442899 +0000 UTC m=+0.138344985 container remove a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a43b70a-ee07-46b4-9b5c-ce99a1259414, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 06:09:37 localhost kernel: device tap1977f8ed-79 left promiscuous mode Oct 5 06:09:37 localhost nova_compute[297021]: 2025-10-05 10:09:37.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:37 localhost systemd[1]: var-lib-containers-storage-overlay-bbcd4c3368cf33f90da89f2e592cedbcffb8f88b46b13bd1de549a0c08d749cc-merged.mount: Deactivated successfully. Oct 5 06:09:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a243aa475631a145588885e2bf51c3d650c14ca3e518feca2beb9e7f1a46ce26-userdata-shm.mount: Deactivated successfully. Oct 5 06:09:37 localhost nova_compute[297021]: 2025-10-05 10:09:37.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:37.712 272040 INFO neutron.agent.dhcp.agent [None req-10b39b6f-e957-4c9d-b7d5-92c0da666d7c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:37 localhost systemd[1]: run-netns-qdhcp\x2d1a43b70a\x2dee07\x2d46b4\x2d9b5c\x2dce99a1259414.mount: Deactivated successfully. 
Oct 5 06:09:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:37.713 272040 INFO neutron.agent.dhcp.agent [None req-10b39b6f-e957-4c9d-b7d5-92c0da666d7c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:37.714 272040 INFO neutron.agent.dhcp.agent [None req-10b39b6f-e957-4c9d-b7d5-92c0da666d7c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:37.715 272040 INFO neutron.agent.dhcp.agent [None req-10b39b6f-e957-4c9d-b7d5-92c0da666d7c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:37 localhost ovn_controller[157794]: 2025-10-05T10:09:37Z|00377|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:09:37 localhost nova_compute[297021]: 2025-10-05 10:09:37.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:09:38 localhost podman[337739]: 2025-10-05 10:09:38.680627411 +0000 UTC m=+0.082107995 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:09:38 localhost podman[337739]: 2025-10-05 10:09:38.744935398 +0000 UTC m=+0.146416002 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller) Oct 5 06:09:38 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:09:40 localhost nova_compute[297021]: 2025-10-05 10:09:40.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:40 localhost nova_compute[297021]: 2025-10-05 10:09:40.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:09:41 localhost podman[337766]: 2025-10-05 10:09:41.670804234 +0000 UTC m=+0.083826321 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:09:41 localhost podman[337766]: 2025-10-05 10:09:41.685806787 +0000 UTC m=+0.098828844 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible) Oct 5 06:09:41 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:09:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:09:41 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1448621554' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:09:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e193 do_prune osdmap full prune enabled Oct 5 06:09:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e194 e194: 6 total, 6 up, 6 in Oct 5 06:09:43 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e194: 6 total, 6 up, 6 in Oct 5 06:09:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e194 do_prune osdmap full prune enabled Oct 5 06:09:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e195 e195: 6 total, 6 up, 6 in Oct 5 06:09:44 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e195: 6 total, 6 up, 6 in Oct 5 06:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:09:44 localhost podman[337785]: 2025-10-05 10:09:44.678337571 +0000 UTC m=+0.087357806 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64) Oct 5 06:09:44 localhost podman[337785]: 2025-10-05 10:09:44.696794127 +0000 UTC m=+0.105814362 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41) Oct 5 06:09:44 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:09:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e195 do_prune osdmap full prune enabled Oct 5 06:09:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e196 e196: 6 total, 6 up, 6 in Oct 5 06:09:45 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e196: 6 total, 6 up, 6 in Oct 5 06:09:45 localhost nova_compute[297021]: 2025-10-05 10:09:45.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:45 localhost nova_compute[297021]: 2025-10-05 10:09:45.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:45 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:45.727 2 INFO neutron.agent.securitygroups_rpc [None req-e82370c2-2ab3-4aa4-b2e0-b7050fb43aea 978c796c7f894ed592893244265edb3c ec53f18d13214ccf80ee92ca6e4213ee - - default default] Security group member updated ['d315d073-b14e-490f-90b1-1d6b5febcd73']#033[00m Oct 5 06:09:45 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:45.824 2 INFO neutron.agent.securitygroups_rpc [None req-e82370c2-2ab3-4aa4-b2e0-b7050fb43aea 978c796c7f894ed592893244265edb3c ec53f18d13214ccf80ee92ca6e4213ee - - default default] Security group member updated ['d315d073-b14e-490f-90b1-1d6b5febcd73']#033[00m Oct 5 06:09:46 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:46.237 2 INFO neutron.agent.securitygroups_rpc [None req-a2bafd38-32f2-4fe5-bbc1-171c22c32945 978c796c7f894ed592893244265edb3c ec53f18d13214ccf80ee92ca6e4213ee - - default default] Security group member updated ['d315d073-b14e-490f-90b1-1d6b5febcd73']#033[00m Oct 5 06:09:46 localhost neutron_sriov_agent[264984]: 2025-10-05 10:09:46.524 2 INFO neutron.agent.securitygroups_rpc [None req-171beb7e-4bcd-41f6-af20-b5b34448b785 978c796c7f894ed592893244265edb3c ec53f18d13214ccf80ee92ca6e4213ee - - default default] Security group 
member updated ['d315d073-b14e-490f-90b1-1d6b5febcd73']#033[00m Oct 5 06:09:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e196 do_prune osdmap full prune enabled Oct 5 06:09:46 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:46.547 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e197 e197: 6 total, 6 up, 6 in Oct 5 06:09:46 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e197: 6 total, 6 up, 6 in Oct 5 06:09:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:09:47 localhost podman[337806]: 2025-10-05 10:09:47.696537666 +0000 UTC m=+0.103769937 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:09:47 
localhost podman[337806]: 2025-10-05 10:09:47.704773697 +0000 UTC m=+0.112005998 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:09:47 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:09:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e197 do_prune osdmap full prune enabled Oct 5 06:09:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e198 e198: 6 total, 6 up, 6 in Oct 5 06:09:47 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e198: 6 total, 6 up, 6 in Oct 5 06:09:48 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
Oct 5 06:09:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e198 do_prune osdmap full prune enabled Oct 5 06:09:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e199 e199: 6 total, 6 up, 6 in Oct 5 06:09:48 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e199: 6 total, 6 up, 6 in Oct 5 06:09:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e199 do_prune osdmap full prune enabled Oct 5 06:09:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e200 e200: 6 total, 6 up, 6 in Oct 5 06:09:49 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e200: 6 total, 6 up, 6 in Oct 5 06:09:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:49 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3824568785' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:49 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3824568785' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:50 localhost nova_compute[297021]: 2025-10-05 10:09:50.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:09:50 localhost nova_compute[297021]: 2025-10-05 10:09:50.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:09:50 localhost nova_compute[297021]: 2025-10-05 10:09:50.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:09:50 localhost nova_compute[297021]: 2025-10-05 10:09:50.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:50 localhost nova_compute[297021]: 2025-10-05 10:09:50.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:50 localhost nova_compute[297021]: 2025-10-05 10:09:50.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:51 localhost podman[248506]: time="2025-10-05T10:09:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:09:51 localhost podman[248506]: @ - - [05/Oct/2025:10:09:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147502 "" "Go-http-client/1.1" Oct 5 06:09:51 localhost podman[248506]: @ - - [05/Oct/2025:10:09:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19863 "" "Go-http-client/1.1" 
Oct 5 06:09:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e200 do_prune osdmap full prune enabled Oct 5 06:09:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e201 e201: 6 total, 6 up, 6 in Oct 5 06:09:51 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e201: 6 total, 6 up, 6 in Oct 5 06:09:52 localhost openstack_network_exporter[250601]: ERROR 10:09:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:09:52 localhost openstack_network_exporter[250601]: ERROR 10:09:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:09:52 localhost openstack_network_exporter[250601]: ERROR 10:09:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:09:52 localhost openstack_network_exporter[250601]: ERROR 10:09:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:09:52 localhost openstack_network_exporter[250601]: Oct 5 06:09:52 localhost openstack_network_exporter[250601]: ERROR 10:09:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:09:52 localhost openstack_network_exporter[250601]: Oct 5 06:09:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:09:52 localhost podman[337830]: 2025-10-05 10:09:52.666152323 +0000 UTC m=+0.077878001 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:09:52 localhost podman[337830]: 2025-10-05 10:09:52.674654131 +0000 UTC m=+0.086379789 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:09:52 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:09:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e201 do_prune osdmap full prune enabled Oct 5 06:09:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e202 e202: 6 total, 6 up, 6 in Oct 5 06:09:52 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e202: 6 total, 6 up, 6 in Oct 5 06:09:53 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:09:53.637 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e202 do_prune osdmap full prune enabled Oct 5 06:09:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e203 e203: 6 total, 6 up, 6 in Oct 5 06:09:54 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e203: 6 total, 6 up, 6 in Oct 5 06:09:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e203 do_prune osdmap full prune enabled Oct 5 06:09:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e204 e204: 6 total, 6 up, 6 in Oct 5 06:09:55 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e204: 6 total, 6 up, 6 in Oct 5 06:09:55 localhost nova_compute[297021]: 2025-10-05 10:09:55.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:09:55 localhost nova_compute[297021]: 2025-10-05 10:09:55.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:09:55 localhost nova_compute[297021]: 2025-10-05 10:09:55.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:09:55 localhost nova_compute[297021]: 2025-10-05 10:09:55.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:55 localhost nova_compute[297021]: 2025-10-05 10:09:55.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:55 localhost nova_compute[297021]: 2025-10-05 10:09:55.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:09:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e204 do_prune osdmap full prune enabled Oct 5 06:09:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e205 e205: 6 total, 6 up, 6 in Oct 5 06:09:56 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e205: 6 total, 6 up, 6 in Oct 5 06:09:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e205 do_prune osdmap full prune enabled Oct 5 06:09:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e206 e206: 6 total, 6 up, 6 in Oct 5 06:09:56 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e206: 6 total, 6 up, 6 in Oct 5 06:09:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:09:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 06:09:58 localhost podman[337853]: 2025-10-05 10:09:58.671964249 +0000 UTC m=+0.079517026 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:09:58 localhost podman[337853]: 2025-10-05 10:09:58.712851167 +0000 UTC m=+0.120403934 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:09:58 localhost systemd[1]: tmp-crun.8elhJ2.mount: Deactivated successfully. Oct 5 06:09:58 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:09:58 localhost podman[337854]: 2025-10-05 10:09:58.734802307 +0000 UTC m=+0.139209629 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd) Oct 5 06:09:58 localhost podman[337854]: 2025-10-05 10:09:58.750795815 +0000 UTC m=+0.155203127 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:09:58 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:10:00 localhost ceph-mon[308154]: log_channel(cluster) log [INF] : overall HEALTH_OK Oct 5 06:10:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e206 do_prune osdmap full prune enabled Oct 5 06:10:00 localhost ceph-mon[308154]: overall HEALTH_OK Oct 5 06:10:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e207 e207: 6 total, 6 up, 6 in Oct 5 06:10:00 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e207: 6 total, 6 up, 6 in Oct 5 06:10:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:10:00 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/950238563' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:10:00 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:10:00 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/950238563' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:10:00 localhost nova_compute[297021]: 2025-10-05 10:10:00.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:00 localhost nova_compute[297021]: 2025-10-05 10:10:00.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e207 do_prune osdmap full prune enabled Oct 5 06:10:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e208 e208: 6 total, 6 up, 6 in Oct 5 06:10:01 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e208: 6 total, 6 up, 6 in Oct 5 06:10:03 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:10:03 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2746516474' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:10:03 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:10:03 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2746516474' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:10:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:10:05 localhost podman[337890]: 2025-10-05 10:10:05.674204075 +0000 UTC m=+0.085257850 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:10:05 localhost podman[337890]: 2025-10-05 10:10:05.70714649 +0000 UTC 
m=+0.118200315 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:10:05 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:10:05 localhost nova_compute[297021]: 2025-10-05 10:10:05.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:10:05 localhost nova_compute[297021]: 2025-10-05 10:10:05.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:10:05 localhost nova_compute[297021]: 2025-10-05 10:10:05.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:10:05 localhost nova_compute[297021]: 2025-10-05 10:10:05.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:10:05 localhost nova_compute[297021]: 2025-10-05 10:10:05.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:05 localhost nova_compute[297021]: 2025-10-05 10:10:05.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:10:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e208 do_prune osdmap full prune enabled Oct 5 06:10:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e209 e209: 6 total, 6 up, 6 in Oct 5 06:10:06 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e209: 6 total, 6 up, 6 in Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0. 
Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.853585) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61 Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006853642, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1058, "num_deletes": 266, "total_data_size": 1028095, "memory_usage": 1048760, "flush_reason": "Manual Compaction"} Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006859846, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 1010148, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32705, "largest_seqno": 33762, "table_properties": {"data_size": 1004870, "index_size": 2685, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12715, "raw_average_key_size": 21, "raw_value_size": 993825, "raw_average_value_size": 1670, "num_data_blocks": 111, "num_entries": 595, "num_filter_entries": 595, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658966, "oldest_key_time": 1759658966, "file_creation_time": 1759659006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}} Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 6302 microseconds, and 3506 cpu microseconds. Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.859892) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 1010148 bytes OK Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.859915) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.861764) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.861787) EVENT_LOG_v1 {"time_micros": 1759659006861780, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.861812) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1022775, prev total WAL file size 
1022775, number of live WAL files 2. Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.863351) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323635' seq:72057594037927935, type:22 .. '6C6F676D0034353137' seq:0, type:0; will stop at (end) Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(986KB)], [60(15MB)] Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006863423, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 17458961, "oldest_snapshot_seqno": -1} Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 13143 keys, 16756714 bytes, temperature: kUnknown Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006939262, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 16756714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16682138, "index_size": 40619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32901, "raw_key_size": 354842, "raw_average_key_size": 26, "raw_value_size": 16458710, 
"raw_average_value_size": 1252, "num_data_blocks": 1505, "num_entries": 13143, "num_filter_entries": 13143, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759659006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}} Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.939610) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 16756714 bytes Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.941093) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.9 rd, 220.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 15.7 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(33.9) write-amplify(16.6) OK, records in: 13696, records dropped: 553 output_compression: NoCompression Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.941122) EVENT_LOG_v1 {"time_micros": 1759659006941110, "job": 36, "event": "compaction_finished", "compaction_time_micros": 75944, "compaction_time_cpu_micros": 47660, "output_level": 6, "num_output_files": 1, "total_output_size": 16756714, "num_input_records": 13696, "num_output_records": 13143, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006941377, "job": 36, "event": "table_file_deletion", "file_number": 62} Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006943756, "job": 
36, "event": "table_file_deletion", "file_number": 60} Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.863264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.943841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.943849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.943852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.943855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:10:06 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:10:06.943858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:10:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:10:09 localhost podman[337908]: 2025-10-05 10:10:09.669184697 +0000 UTC m=+0.078707924 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:10:09 localhost podman[337908]: 2025-10-05 10:10:09.697764934 +0000 UTC m=+0.107288121 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller) Oct 5 06:10:09 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:10:10 localhost nova_compute[297021]: 2025-10-05 10:10:10.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:10:12 localhost podman[337935]: 2025-10-05 10:10:12.614464233 +0000 UTC m=+0.075085756 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible) Oct 5 06:10:12 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:12.624 272040 INFO neutron.agent.linux.ip_lib [None 
req-5cf40161-3dac-4ac3-8d6c-d4946ea8fdeb - - - - - -] Device tapd715c47f-f3 cannot be used as it has no MAC address#033[00m Oct 5 06:10:12 localhost podman[337935]: 2025-10-05 10:10:12.695366916 +0000 UTC m=+0.155988459 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 06:10:12 localhost 
nova_compute[297021]: 2025-10-05 10:10:12.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:12 localhost kernel: device tapd715c47f-f3 entered promiscuous mode Oct 5 06:10:12 localhost ovn_controller[157794]: 2025-10-05T10:10:12Z|00378|binding|INFO|Claiming lport d715c47f-f314-4023-8321-8d33e8ca4232 for this chassis. Oct 5 06:10:12 localhost ovn_controller[157794]: 2025-10-05T10:10:12Z|00379|binding|INFO|d715c47f-f314-4023-8321-8d33e8ca4232: Claiming unknown Oct 5 06:10:12 localhost nova_compute[297021]: 2025-10-05 10:10:12.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:12 localhost systemd-udevd[337962]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:10:12 localhost NetworkManager[5981]: [1759659012.7142] manager: (tapd715c47f-f3): new Generic device (/org/freedesktop/NetworkManager/Devices/60) Oct 5 06:10:12 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:10:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:12.730 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.243/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-663e51ad-22e0-4837-9a4a-a66048f1a553', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-663e51ad-22e0-4837-9a4a-a66048f1a553', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36bd7039b7af4b7db5db22e101f63a40', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e92452c-114f-4a4c-855e-edd5381e9fad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d715c47f-f314-4023-8321-8d33e8ca4232) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:10:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:12.732 163434 INFO neutron.agent.ovn.metadata.agent [-] Port d715c47f-f314-4023-8321-8d33e8ca4232 in datapath 663e51ad-22e0-4837-9a4a-a66048f1a553 bound to our chassis#033[00m Oct 5 06:10:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:12.735 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port 480cbdb0-ba6d-4fc4-982e-1e00c0546450 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:10:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:12.735 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 663e51ad-22e0-4837-9a4a-a66048f1a553, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:10:12 localhost journal[237931]: ethtool ioctl error on tapd715c47f-f3: No such device Oct 5 06:10:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:12.737 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8b6dc8-9108-4947-94da-f004af5ffc59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:10:12 localhost journal[237931]: ethtool ioctl error on tapd715c47f-f3: No such device Oct 5 06:10:12 localhost ovn_controller[157794]: 2025-10-05T10:10:12Z|00380|binding|INFO|Setting lport d715c47f-f314-4023-8321-8d33e8ca4232 ovn-installed in OVS Oct 5 06:10:12 localhost ovn_controller[157794]: 2025-10-05T10:10:12Z|00381|binding|INFO|Setting lport d715c47f-f314-4023-8321-8d33e8ca4232 up in Southbound Oct 5 06:10:12 localhost journal[237931]: ethtool ioctl error on tapd715c47f-f3: No such device Oct 5 06:10:12 localhost nova_compute[297021]: 2025-10-05 10:10:12.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:12 localhost journal[237931]: ethtool ioctl error on tapd715c47f-f3: No such device Oct 5 06:10:12 localhost journal[237931]: ethtool ioctl error on tapd715c47f-f3: No such device Oct 5 06:10:12 localhost journal[237931]: ethtool ioctl error on tapd715c47f-f3: No such device Oct 5 06:10:12 localhost journal[237931]: ethtool ioctl error on tapd715c47f-f3: No such device Oct 5 06:10:12 localhost journal[237931]: ethtool ioctl error on tapd715c47f-f3: No such device Oct 5 06:10:12 
localhost nova_compute[297021]: 2025-10-05 10:10:12.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:12 localhost nova_compute[297021]: 2025-10-05 10:10:12.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e209 do_prune osdmap full prune enabled Oct 5 06:10:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e210 e210: 6 total, 6 up, 6 in Oct 5 06:10:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e210: 6 total, 6 up, 6 in Oct 5 06:10:13 localhost podman[338033]: Oct 5 06:10:13 localhost podman[338033]: 2025-10-05 10:10:13.805748588 +0000 UTC m=+0.090762168 container create 30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-663e51ad-22e0-4837-9a4a-a66048f1a553, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 06:10:13 localhost systemd[1]: Started libpod-conmon-30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac.scope. Oct 5 06:10:13 localhost podman[338033]: 2025-10-05 10:10:13.762495887 +0000 UTC m=+0.047509547 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:10:13 localhost systemd[1]: tmp-crun.Lm5XwU.mount: Deactivated successfully. Oct 5 06:10:13 localhost systemd[1]: Started libcrun container. 
Oct 5 06:10:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930d18c74ab1232690e19d593910aa840d9444c78a7c8ec6b6a1bed8d8bf51dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:10:13 localhost podman[338033]: 2025-10-05 10:10:13.888220932 +0000 UTC m=+0.173234522 container init 30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-663e51ad-22e0-4837-9a4a-a66048f1a553, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:10:13 localhost podman[338033]: 2025-10-05 10:10:13.897205703 +0000 UTC m=+0.182219283 container start 30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-663e51ad-22e0-4837-9a4a-a66048f1a553, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:10:13 localhost dnsmasq[338051]: started, version 2.85 cachesize 150 Oct 5 06:10:13 localhost dnsmasq[338051]: DNS service limited to local subnets Oct 5 06:10:13 localhost dnsmasq[338051]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:10:13 localhost dnsmasq[338051]: warning: no upstream servers configured Oct 
5 06:10:13 localhost dnsmasq-dhcp[338051]: DHCP, static leases only on 10.100.255.240, lease time 1d Oct 5 06:10:13 localhost dnsmasq[338051]: read /var/lib/neutron/dhcp/663e51ad-22e0-4837-9a4a-a66048f1a553/addn_hosts - 0 addresses Oct 5 06:10:13 localhost dnsmasq-dhcp[338051]: read /var/lib/neutron/dhcp/663e51ad-22e0-4837-9a4a-a66048f1a553/host Oct 5 06:10:13 localhost dnsmasq-dhcp[338051]: read /var/lib/neutron/dhcp/663e51ad-22e0-4837-9a4a-a66048f1a553/opts Oct 5 06:10:14 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:14.063 272040 INFO neutron.agent.dhcp.agent [None req-27143e42-2b3e-49b2-8eb2-06fa191730de - - - - - -] DHCP configuration for ports {'430ee941-a105-487b-a1e0-4657f3d1211c'} is completed#033[00m Oct 5 06:10:14 localhost nova_compute[297021]: 2025-10-05 10:10:14.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:14 localhost nova_compute[297021]: 2025-10-05 10:10:14.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:14 localhost nova_compute[297021]: 2025-10-05 10:10:14.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:14 localhost nova_compute[297021]: 2025-10-05 10:10:14.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:10:14 localhost systemd[1]: tmp-crun.kk3niI.mount: Deactivated successfully. Oct 5 06:10:14 localhost podman[338052]: 2025-10-05 10:10:14.920512318 +0000 UTC m=+0.083875463 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git) Oct 5 06:10:14 localhost podman[338052]: 2025-10-05 10:10:14.937805052 +0000 UTC m=+0.101168167 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.buildah.version=1.33.7, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 06:10:14 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:10:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e210 do_prune osdmap full prune enabled Oct 5 06:10:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e211 e211: 6 total, 6 up, 6 in Oct 5 06:10:15 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e211: 6 total, 6 up, 6 in Oct 5 06:10:15 localhost nova_compute[297021]: 2025-10-05 10:10:15.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e211 do_prune osdmap full prune enabled Oct 5 06:10:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e212 e212: 6 total, 6 up, 6 in Oct 5 06:10:16 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e212: 6 total, 6 up, 6 in Oct 5 06:10:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:10:16 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1721351698' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:10:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:10:16 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1721351698' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:10:17 localhost nova_compute[297021]: 2025-10-05 10:10:17.418 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:18 localhost nova_compute[297021]: 2025-10-05 10:10:18.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:10:18 localhost podman[338071]: 2025-10-05 10:10:18.671529097 +0000 UTC m=+0.083214795 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 
06:10:18 localhost podman[338071]: 2025-10-05 10:10:18.709919198 +0000 UTC m=+0.121604916 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:10:18 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 06:10:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e212 do_prune osdmap full prune enabled Oct 5 06:10:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e213 e213: 6 total, 6 up, 6 in Oct 5 06:10:19 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e213: 6 total, 6 up, 6 in Oct 5 06:10:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e213 do_prune osdmap full prune enabled Oct 5 06:10:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e214 e214: 6 total, 6 up, 6 in Oct 5 06:10:20 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e214: 6 total, 6 up, 6 in Oct 5 06:10:20 localhost nova_compute[297021]: 2025-10-05 10:10:20.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:20.471 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:10:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:20.472 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:10:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:20.472 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:10:20 localhost 
nova_compute[297021]: 2025-10-05 10:10:20.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:10:20 localhost nova_compute[297021]: 2025-10-05 10:10:20.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:10:20 localhost nova_compute[297021]: 2025-10-05 10:10:20.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5032 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:10:20 localhost nova_compute[297021]: 2025-10-05 10:10:20.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:10:20 localhost nova_compute[297021]: 2025-10-05 10:10:20.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:20 localhost nova_compute[297021]: 2025-10-05 10:10:20.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:10:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e214 do_prune osdmap full prune enabled Oct 5 06:10:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e215 e215: 6 total, 6 up, 6 in Oct 5 06:10:21 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e215: 6 total, 6 up, 6 in Oct 5 06:10:21 localhost ovn_controller[157794]: 2025-10-05T10:10:21Z|00382|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:10:21 localhost nova_compute[297021]: 2025-10-05 10:10:21.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:21 
localhost nova_compute[297021]: 2025-10-05 10:10:21.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:21 localhost podman[248506]: time="2025-10-05T10:10:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:10:21 localhost podman[248506]: @ - - [05/Oct/2025:10:10:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149330 "" "Go-http-client/1.1" Oct 5 06:10:21 localhost podman[248506]: @ - - [05/Oct/2025:10:10:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20342 "" "Go-http-client/1.1" Oct 5 06:10:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e215 do_prune osdmap full prune enabled Oct 5 06:10:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e216 e216: 6 total, 6 up, 6 in Oct 5 06:10:21 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e216: 6 total, 6 up, 6 in Oct 5 06:10:22 localhost openstack_network_exporter[250601]: ERROR 10:10:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:10:22 localhost openstack_network_exporter[250601]: ERROR 10:10:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:10:22 localhost openstack_network_exporter[250601]: ERROR 10:10:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:10:22 localhost openstack_network_exporter[250601]: ERROR 10:10:22 
appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:10:22 localhost openstack_network_exporter[250601]: Oct 5 06:10:22 localhost openstack_network_exporter[250601]: ERROR 10:10:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:10:22 localhost openstack_network_exporter[250601]: Oct 5 06:10:22 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:22.476 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:10:22 localhost nova_compute[297021]: 2025-10-05 10:10:22.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:22 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:22.480 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:10:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:10:22 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2860620096' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:10:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e216 do_prune osdmap full prune enabled Oct 5 06:10:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e217 e217: 6 total, 6 up, 6 in Oct 5 06:10:23 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e217: 6 total, 6 up, 6 in Oct 5 06:10:23 localhost nova_compute[297021]: 2025-10-05 10:10:23.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:23 localhost nova_compute[297021]: 2025-10-05 10:10:23.515 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:10:23 localhost nova_compute[297021]: 2025-10-05 10:10:23.515 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:10:23 localhost nova_compute[297021]: 2025-10-05 10:10:23.515 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:10:23 localhost nova_compute[297021]: 2025-10-05 10:10:23.516 2 DEBUG nova.compute.resource_tracker [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:10:23 localhost nova_compute[297021]: 2025-10-05 10:10:23.516 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:10:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:10:23 localhost podman[338097]: 2025-10-05 10:10:23.677226773 +0000 UTC m=+0.083198624 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:10:23 localhost podman[338097]: 2025-10-05 10:10:23.68602301 +0000 UTC m=+0.091994811 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:10:23 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:10:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:10:23 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3672742157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:10:23 localhost nova_compute[297021]: 2025-10-05 10:10:23.979 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.053 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.053 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:10:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e217 do_prune osdmap full prune enabled Oct 5 06:10:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e218 e218: 6 total, 6 up, 6 in Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.272 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.274 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11103MB free_disk=41.700164794921875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.275 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:10:24 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e218: 6 total, 6 up, 6 in Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.277 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.361 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.362 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.363 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.413 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:10:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:10:24 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2209801952' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.904 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.911 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.929 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.931 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:10:24 localhost nova_compute[297021]: 2025-10-05 10:10:24.932 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:10:25 localhost nova_compute[297021]: 2025-10-05 10:10:25.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e218 do_prune osdmap full prune enabled Oct 5 06:10:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e219 e219: 6 total, 6 up, 6 in Oct 5 06:10:26 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e219: 6 total, 6 up, 6 in Oct 5 06:10:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e219 do_prune osdmap full prune enabled Oct 5 06:10:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e220 e220: 6 total, 6 up, 6 in Oct 5 06:10:26 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e220: 6 total, 6 up, 6 in Oct 5 06:10:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:10:27 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3915402875' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:10:27 localhost nova_compute[297021]: 2025-10-05 10:10:27.933 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:27 localhost nova_compute[297021]: 2025-10-05 10:10:27.934 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:10:27 localhost nova_compute[297021]: 2025-10-05 10:10:27.934 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:10:28 localhost nova_compute[297021]: 2025-10-05 10:10:28.003 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:10:28 localhost nova_compute[297021]: 2025-10-05 10:10:28.004 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:10:28 localhost nova_compute[297021]: 2025-10-05 10:10:28.004 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m 
Oct 5 06:10:28 localhost nova_compute[297021]: 2025-10-05 10:10:28.005 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:10:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e220 do_prune osdmap full prune enabled Oct 5 06:10:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e221 e221: 6 total, 6 up, 6 in Oct 5 06:10:28 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e221: 6 total, 6 up, 6 in Oct 5 06:10:28 localhost nova_compute[297021]: 2025-10-05 10:10:28.531 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": 
{}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:10:28 localhost nova_compute[297021]: 2025-10-05 10:10:28.554 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:10:28 localhost nova_compute[297021]: 2025-10-05 10:10:28.554 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:10:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e221 do_prune osdmap full prune enabled Oct 5 06:10:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e222 e222: 6 total, 6 up, 6 in Oct 5 06:10:29 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e222: 6 total, 6 up, 6 in Oct 5 06:10:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:10:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:10:29 localhost systemd[1]: tmp-crun.zos1hk.mount: Deactivated successfully. 
Oct 5 06:10:29 localhost podman[338161]: 2025-10-05 10:10:29.67302794 +0000 UTC m=+0.081024236 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 06:10:29 localhost podman[338161]: 2025-10-05 10:10:29.682668039 +0000 UTC m=+0.090664335 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 06:10:29 localhost podman[338162]: 2025-10-05 10:10:29.698210016 +0000 UTC m=+0.097444527 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 06:10:29 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:10:29 localhost podman[338162]: 2025-10-05 10:10:29.708315537 +0000 UTC m=+0.107550048 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:10:29 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:10:29 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:29.750 272040 INFO neutron.agent.dhcp.agent [None req-07c1ae0e-38ae-4e8e-b3ba-75b8002a0663 - - - - - -] Synchronizing state#033[00m Oct 5 06:10:29 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:29.886 272040 INFO neutron.agent.dhcp.agent [None req-618c6529-80fc-43fc-845e-d7ec0223f75c - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:10:29 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:29.887 272040 INFO neutron.agent.dhcp.agent [-] Starting network e3aa3135-882c-44b6-ac0a-f748039b4e28 dhcp configuration#033[00m Oct 5 06:10:29 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:29.888 272040 INFO neutron.agent.dhcp.agent [-] Finished network e3aa3135-882c-44b6-ac0a-f748039b4e28 dhcp configuration#033[00m Oct 5 06:10:29 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:29.888 272040 INFO neutron.agent.dhcp.agent [None req-618c6529-80fc-43fc-845e-d7ec0223f75c - - - - - -] Synchronizing state complete#033[00m Oct 5 06:10:29 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:29.889 272040 INFO neutron.agent.dhcp.agent [None req-f3788a25-49ec-4dda-a6b8-72e0f34fb709 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:10:30 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:30.328 272040 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:10:30 localhost nova_compute[297021]: 2025-10-05 10:10:30.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:10:30 localhost nova_compute[297021]: 2025-10-05 10:10:30.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:30 localhost nova_compute[297021]: 2025-10-05 10:10:30.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:10:30 localhost nova_compute[297021]: 2025-10-05 10:10:30.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:10:30 localhost nova_compute[297021]: 2025-10-05 10:10:30.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:10:30 localhost nova_compute[297021]: 2025-10-05 10:10:30.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e222 do_prune osdmap full prune enabled Oct 5 06:10:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e223 e223: 6 total, 6 up, 6 in Oct 5 06:10:31 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e223: 6 total, 6 up, 6 in Oct 5 06:10:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e223 do_prune osdmap full prune enabled Oct 5 06:10:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e224 e224: 6 total, 6 up, 6 in Oct 5 06:10:31 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e224: 6 total, 6 up, 6 in Oct 5 06:10:32 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:32.483 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:10:32 localhost neutron_sriov_agent[264984]: 2025-10-05 10:10:32.742 2 INFO neutron.agent.securitygroups_rpc [req-c108df62-5349-477f-8022-dc93d2a68330 req-724fb1dc-04d4-4b1f-a5ff-0a795b0c07df f780144ddebc407da5a029259c3265a6 1c8daf35e79847329bde1c6cf0340477 - - default default] Security group member updated ['d9126934-1777-40de-b348-3975c8158884']#033[00m Oct 5 06:10:32 localhost dnsmasq[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/addn_hosts - 1 addresses Oct 5 06:10:32 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/host Oct 5 06:10:32 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/opts Oct 5 06:10:32 localhost podman[338218]: 2025-10-05 10:10:32.982526635 +0000 UTC m=+0.061085561 container kill 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:10:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e224 do_prune osdmap full prune enabled Oct 5 06:10:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e225 e225: 6 total, 6 up, 6 in Oct 5 06:10:33 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e225: 6 total, 6 up, 6 in Oct 5 06:10:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 
06:10:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 06:10:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 06:10:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 06:10:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 06:10:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 06:10:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e225 do_prune osdmap full prune enabled Oct 5 06:10:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e226 e226: 6 total, 6 up, 6 in Oct 5 06:10:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e226: 6 total, 6 up, 6 in Oct 5 
06:10:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:10:35 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e226 do_prune osdmap full prune enabled Oct 5 06:10:35 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:10:35 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e227 e227: 6 total, 6 up, 6 in Oct 5 06:10:35 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e227: 6 total, 6 up, 6 in Oct 5 06:10:35 localhost nova_compute[297021]: 2025-10-05 10:10:35.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e227 do_prune osdmap full prune enabled Oct 5 06:10:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e228 e228: 6 total, 6 up, 6 in Oct 5 06:10:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e228: 6 total, 6 up, 6 in Oct 5 
06:10:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:10:36 localhost podman[338380]: 2025-10-05 10:10:36.682546326 +0000 UTC m=+0.087485250 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:10:36 localhost podman[338380]: 2025-10-05 10:10:36.694095546 +0000 UTC m=+0.099034470 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Oct 5 06:10:36 localhost 
systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:10:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e228 do_prune osdmap full prune enabled Oct 5 06:10:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e229 e229: 6 total, 6 up, 6 in Oct 5 06:10:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e229: 6 total, 6 up, 6 in Oct 5 06:10:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:10:36 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:10:37 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3503794261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:10:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:10:37 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3503794261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:10:37 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:38 localhost dnsmasq[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/addn_hosts - 0 addresses Oct 5 06:10:38 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/host Oct 5 06:10:38 localhost dnsmasq-dhcp[336221]: read /var/lib/neutron/dhcp/041cfe8f-6406-4960-bbb2-faeb2bcfb0e5/opts Oct 5 06:10:38 localhost podman[338414]: 2025-10-05 10:10:38.342385341 +0000 UTC m=+0.064288927 container kill 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:10:38 localhost nova_compute[297021]: 2025-10-05 10:10:38.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:38 localhost kernel: device tap51ef4aa5-49 left promiscuous mode Oct 5 06:10:38 localhost ovn_controller[157794]: 2025-10-05T10:10:38Z|00383|binding|INFO|Releasing lport 51ef4aa5-49b0-4c06-8223-999b05c8fcc3 from this chassis (sb_readonly=0) Oct 5 06:10:38 localhost ovn_controller[157794]: 2025-10-05T10:10:38Z|00384|binding|INFO|Setting lport 51ef4aa5-49b0-4c06-8223-999b05c8fcc3 down in Southbound Oct 5 06:10:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:38.531 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] 
Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c8daf35e79847329bde1c6cf0340477', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af9e6832-0d64-4de3-813f-a8d270389d12, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=51ef4aa5-49b0-4c06-8223-999b05c8fcc3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:10:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:38.533 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 51ef4aa5-49b0-4c06-8223-999b05c8fcc3 in datapath 041cfe8f-6406-4960-bbb2-faeb2bcfb0e5 unbound from our chassis#033[00m Oct 5 06:10:38 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:38.535 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:10:38 localhost 
ovn_metadata_agent[163429]: 2025-10-05 10:10:38.536 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddd22c4-8c98-4217-9793-a4f42b40ce77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:10:38 localhost nova_compute[297021]: 2025-10-05 10:10:38.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.840 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.841 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.846 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd1008ca2-719f-4acd-8a5b-d48990d8eeb1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.841338', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8ade1af6-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': '9c37ef863ca46c1fbdda9e7db7ea78adeab0f9032806b207ec252433e81ad8f1'}]}, 'timestamp': '2025-10-05 10:10:38.847151', '_unique_id': 'fc613a7dcc4549c585fbd65a8f0588c7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.848 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.850 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.850 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f7cfcb7d-0182-489b-ae2e-8d1fba8b595b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.850190', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8adea9b2-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': '2612a15df4621f96f4dac40bd8cacc232fdbbdeda9438ee22861bc4d72c62e82'}]}, 'timestamp': '2025-10-05 10:10:38.850748', '_unique_id': 'e2b2de44abc14306a17b67b9dd026c80'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.851 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.852 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.874 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.874 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a6117514-baa8-4879-a003-402d4f6ac598', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:10:38.853034', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8ae2572e-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': '5345c1e9a9460f1475c986bb8afff1c635fde07bded37b0816ae236f834859f1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:10:38.853034', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8ae26cc8-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': '2e470b1692edddc3940f2f4aa96baa1d2dff39d9a43b9ea8c4f1932d3f4d8536'}]}, 'timestamp': '2025-10-05 10:10:38.875508', '_unique_id': 'd7b13801ab1c45b1ac7ab04b68aa8bac'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.877 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.888 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.889 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'a1965b77-4430-4919-94f3-c192f2d42aea', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:10:38.877942', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8ae48f08-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.101859643, 'message_signature': '02141e6282dbf53fc57f638b140b396ab1812ec3eff2d79dc4de0f61b808e390'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:10:38.877942', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8ae4a2cc-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.101859643, 'message_signature': '581137727f28d9551d1c0ccf9b27af682469ce5f9b9afec3c3b6494ae67a1056'}]}, 'timestamp': '2025-10-05 10:10:38.889893', '_unique_id': '834c218ace44400ba3a217ab74695104'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.890 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.892 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.892 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '75fface3-2d95-4119-9e81-20ada59b836e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.892255', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8ae513a6-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': '6aba5fea58041edfbe713f4799cf017498145d68bae5ce5513cfe6316b8e883f'}]}, 'timestamp': '2025-10-05 10:10:38.892772', '_unique_id': '8a5e542546074260b0fda63b211a83f1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.893 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.894 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.895 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5eb73970-fc4c-4370-b0d8-cec5955be4fc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.895031', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8ae57ec2-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': '42fdcf1e7cc909fc9cfca3ac720c4c69302b8b2159760bb70b254aec7b4af4e2'}]}, 'timestamp': '2025-10-05 10:10:38.895607', '_unique_id': 'f8fde492535b414fbc3f56251dab838a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:10:38.896 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.897 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.913 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7cbbe471-ea6f-47e8-82ac-4e7692bfbaab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:10:38.897809', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '8ae87a5a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.137790347, 'message_signature': '12db8f60d490d4922eb94e32ae95280f2ce4833720781b930e4a1774015d9efc'}]}, 'timestamp': '2025-10-05 10:10:38.915055', '_unique_id': 'b2dba27f8bbb431db7f55986c7da89b9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.917 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.917 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.917 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '73ef052f-d825-4241-b7bf-1946b0835fbb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.917478', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 
'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8ae8eb98-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': '254ffc79e86d75b3328d8cc80bdbd3c0c3e481d7cddd6081265bc411456866ee'}]}, 'timestamp': '2025-10-05 10:10:38.918059', '_unique_id': '36c46855ef894c228a0124e6363a758d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging 
self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging return self._send(target, 
ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, 
purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.919 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.920 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.920 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.920 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd62c0f40-eea5-49f4-8a82-82c4e4c5b27a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:10:38.920351', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8ae95c4a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': '6a43cea3d65d5798c1d10326e95facb77604b17e1575082a5562c4a10b7b779c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:10:38.920351', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8ae96cb2-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': '8254e619076166eec8d41796a7133c6743bfdc23305164bb350003605a1d8465'}]}, 'timestamp': '2025-10-05 10:10:38.921234', '_unique_id': 'a8ef923f02b347f2871cca0b03b2a4c0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.922 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.923 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'c43803f3-9a4e-44b7-9d55-e23192314f97', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:10:38.923448', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8ae9d422-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': '19121aa04d0c9a5c7ebd29db38ab0015c7a7eec3b801169cfa7b5745c1a8c0b9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:10:38.923448', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8ae9e458-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': '29ff313769ce1f4df18a2494eebce686ec9eac6ff9db70efe6cf9e1747e183d7'}]}, 'timestamp': '2025-10-05 10:10:38.924298', '_unique_id': '9a72781c47d542dc80161e64db83eb08'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:10:38.925 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.925 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.926 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.926 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.926 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd6c55291-def2-49f5-ae4c-d0d9f0b938c6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:10:38.926535', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8aea4c86-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.101859643, 'message_signature': '2753577193baa72a1f02fc071bf91c041490f5a3fa6a25e2f07d1593616f184b'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:10:38.926535', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8aea5fd2-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.101859643, 'message_signature': '15f934ba21874d9e2440bc2f8af28df91b9bc2663cd6dac645194a21b4a0f87e'}]}, 'timestamp': '2025-10-05 10:10:38.927491', '_unique_id': '29b7723d37e74c79bc4712c338135fed'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.928 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.929 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.929 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.930 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '67729947-19b5-4b19-8cab-2db0def47abe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:10:38.929747', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8aeacae4-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': '0c6aa1e52c04be76098393ffd2dd1c7a3afea783b016b98c00bfa290aecbdb56'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:10:38.929747', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8aeadc1e-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': 'd7ae514ae70396690ac6b89cbb6518dd445679004d5adf0b01046f87205a16de'}]}, 'timestamp': '2025-10-05 10:10:38.930643', '_unique_id': '07fcf4576fb5446db59c9f7950c41540'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.931 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.932 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.932 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.932 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.933 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bf1d6af9-beb8-42d6-b0fa-fc4c4a353b10', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:10:38.932910', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8aeb45b4-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': '5cf3508aadd339fb244696ddf58b6ed06faf8571c4ee93bd3968cdd27dcfcdc1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:10:38.932910', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8aeb575c-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': 'e5d82799c01545dc6ad75a66b8637a8c4680bcf64913331990747ccaeb790423'}]}, 'timestamp': '2025-10-05 10:10:38.933815', '_unique_id': '6ab8817c548743bd909cf1a35400bcf3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.935 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.936 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.936 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.936 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a800977c-0654-46b9-a0ac-ebd88d045ebd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:10:38.936378', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8aebcfd4-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': 'ac2d480dfc885133a299fd277ae7661f14e00136f35c2212da8af401f3e3b2f0'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:10:38.936378', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8aebe0e6-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.076950144, 'message_signature': '97a60ebbbfb9b07cc3e7b3e5040434f728621083857cd63706551918ed038a4c'}]}, 'timestamp': '2025-10-05 10:10:38.937317', '_unique_id': '958c02ef96d34dcb83a5448969ac5121'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05
10:10:38.938 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.938 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.939 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.939 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.939 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1371cb6a-6223-4e00-98aa-1e314204b69c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.939657', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 
'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8aec4d6a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': '386c2354e719b5dc2e10eee7ce3ac1fa438cbf9cb1a9fe11d688e5a5c55d4383'}]}, 'timestamp': '2025-10-05 10:10:38.940122', '_unique_id': 'b1ecc132e1264c118aabf7b90b957666'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:10:38.941 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.941 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.942 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.942 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.942 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4a58811d-8af0-49a0-a232-0ccf6f19e095', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:10:38.942291', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8aecb548-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.101859643, 'message_signature': '0a97b1fe3613f6c08b1b6d931a6778fd800db74d44db942040fea7eea905d3dc'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:10:38.942291', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8aecc56a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.101859643, 'message_signature': '4f95a97d1b9135bd834b09feb3a902d1eec83a89b998567d2d0ee48f3379ee72'}]}, 'timestamp': '2025-10-05 10:10:38.943165', '_unique_id': '1c89fb794d2a4cc2a4d483a61833db16'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.945 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 18140000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b800e6d8-d05e-43c5-bf65-f612e9daad34', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18140000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:10:38.945321', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '8aed2bd6-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.137790347, 'message_signature': '1f0438df615ff91d46add3e28c41dd07dba431f051ce51534d06f6a2a295cf81'}]}, 'timestamp': '2025-10-05 10:10:38.945825', '_unique_id': 'be6a80525515494aa6675eb04022014d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.947 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:10:38.947 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4cd47c2c-7908-4436-b094-55cad3b2eb61', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.947899', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8aed8ef0-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': 
'1eab2d1a61e4c8376af5752749ad4e39e39f9d01bf3b79b1d7a6beec6f07f592'}]}, 'timestamp': '2025-10-05 10:10:38.948351', '_unique_id': '5f98e0568b5145bea2ea2c83824c4262'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR 
oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.949 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.950 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.950 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e07fb83f-c811-45c4-8bde-906ae5e410f7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.950586', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 
0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8aedf4b2-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': '3253e248b5ad4011008665725d294a3dedbaefa716f165786950c8b5ca034cea'}]}, 'timestamp': '2025-10-05 10:10:38.950870', '_unique_id': 'e8434f92cbef4c5ab2534a906b951def'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR 
oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.951 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.952 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'aa166806-a573-4ece-a387-94633f68230b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.952241', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8aee353a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': '4706d03c028c209c1f1d2641e94e82bd471847f8ab32c585d820dfbd0fb9bb01'}]}, 'timestamp': '2025-10-05 10:10:38.952543', '_unique_id': '0dbac926679141deac63af035a766694'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:10:38.953 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.953 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c195912a-ec12-4956-afdd-268a7a1c9ee4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:10:38.953838', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '8aee73c4-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12518.06527555, 'message_signature': '9ca82ea7d2f31e3cc0645b13cbff48ecd530287c40d3937610adad2c20a1705e'}]}, 'timestamp': '2025-10-05 10:10:38.954123', '_unique_id': '5d633e926e924df7ab152768422ac6d9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:10:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:10:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:10:38.954 12 ERROR oslo_messaging.notify.messaging Oct 5 06:10:39 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:39.720 272040 INFO neutron.agent.linux.ip_lib [None req-39955f9e-d885-4d37-8362-fddf822b02b1 - - - - - -] Device tap8be2a294-55 cannot be used as it has no MAC address#033[00m Oct 5 06:10:39 localhost nova_compute[297021]: 2025-10-05 10:10:39.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:39 localhost kernel: device tap8be2a294-55 entered promiscuous mode Oct 5 06:10:39 localhost nova_compute[297021]: 2025-10-05 10:10:39.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:39 localhost ovn_controller[157794]: 2025-10-05T10:10:39Z|00385|binding|INFO|Claiming lport 8be2a294-5567-43f0-a322-2035ca1f5beb for this chassis. Oct 5 06:10:39 localhost ovn_controller[157794]: 2025-10-05T10:10:39Z|00386|binding|INFO|8be2a294-5567-43f0-a322-2035ca1f5beb: Claiming unknown Oct 5 06:10:39 localhost NetworkManager[5981]: [1759659039.7571] manager: (tap8be2a294-55): new Generic device (/org/freedesktop/NetworkManager/Devices/61) Oct 5 06:10:39 localhost systemd-udevd[338446]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:10:39 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:39.764 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-1fa972f4-9566-4419-ac42-3d10a08ddc16', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fa972f4-9566-4419-ac42-3d10a08ddc16', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36bd7039b7af4b7db5db22e101f63a40', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8acb3600-14c9-4ded-a4d8-0bda56585adb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8be2a294-5567-43f0-a322-2035ca1f5beb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:10:39 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:39.766 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 8be2a294-5567-43f0-a322-2035ca1f5beb in datapath 1fa972f4-9566-4419-ac42-3d10a08ddc16 bound to our chassis#033[00m Oct 5 06:10:39 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:39.768 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1fa972f4-9566-4419-ac42-3d10a08ddc16 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:10:39 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:39.768 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b70521b6-9728-4f3b-ae55-f87be9d744d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:10:39 localhost journal[237931]: ethtool ioctl error on tap8be2a294-55: No such device Oct 5 06:10:39 localhost journal[237931]: ethtool ioctl error on tap8be2a294-55: No such device Oct 5 06:10:39 localhost journal[237931]: ethtool ioctl error on tap8be2a294-55: No such device Oct 5 06:10:39 localhost ovn_controller[157794]: 2025-10-05T10:10:39Z|00387|binding|INFO|Setting lport 8be2a294-5567-43f0-a322-2035ca1f5beb ovn-installed in OVS Oct 5 06:10:39 localhost ovn_controller[157794]: 2025-10-05T10:10:39Z|00388|binding|INFO|Setting lport 8be2a294-5567-43f0-a322-2035ca1f5beb up in Southbound Oct 5 06:10:39 localhost journal[237931]: ethtool ioctl error on tap8be2a294-55: No such device Oct 5 06:10:39 localhost nova_compute[297021]: 2025-10-05 10:10:39.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:39 localhost journal[237931]: ethtool ioctl error on tap8be2a294-55: No such device Oct 5 06:10:39 localhost journal[237931]: ethtool ioctl error on tap8be2a294-55: No such device Oct 5 06:10:39 localhost journal[237931]: ethtool ioctl error on tap8be2a294-55: No such device Oct 5 06:10:39 localhost journal[237931]: ethtool ioctl error on tap8be2a294-55: No such device Oct 5 06:10:39 localhost nova_compute[297021]: 2025-10-05 10:10:39.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:39 
localhost nova_compute[297021]: 2025-10-05 10:10:39.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:39 localhost podman[338449]: 2025-10-05 10:10:39.897341219 +0000 UTC m=+0.108375560 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001) Oct 5 06:10:39 localhost podman[338449]: 2025-10-05 10:10:39.939901292 +0000 UTC m=+0.150935653 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:10:39 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:10:40 localhost ovn_controller[157794]: 2025-10-05T10:10:40Z|00389|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:10:40 localhost nova_compute[297021]: 2025-10-05 10:10:40.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:40 localhost podman[338542]: Oct 5 06:10:40 localhost podman[338542]: 2025-10-05 10:10:40.747227848 +0000 UTC m=+0.088933449 container create 1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fa972f4-9566-4419-ac42-3d10a08ddc16, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:10:40 localhost systemd[1]: Started libpod-conmon-1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8.scope. Oct 5 06:10:40 localhost systemd[1]: Started libcrun container. 
Oct 5 06:10:40 localhost podman[338542]: 2025-10-05 10:10:40.702559399 +0000 UTC m=+0.044265030 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:10:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee8a61d6adbd54a7544f9a19857fedb0606eeb7684a52eca06323b24f17c5ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:10:40 localhost podman[338542]: 2025-10-05 10:10:40.816367494 +0000 UTC m=+0.158073095 container init 1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fa972f4-9566-4419-ac42-3d10a08ddc16, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:10:40 localhost podman[338542]: 2025-10-05 10:10:40.826597639 +0000 UTC m=+0.168303290 container start 1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fa972f4-9566-4419-ac42-3d10a08ddc16, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:10:40 localhost nova_compute[297021]: 2025-10-05 10:10:40.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:40 localhost dnsmasq[338560]: started, 
version 2.85 cachesize 150 Oct 5 06:10:40 localhost dnsmasq[338560]: DNS service limited to local subnets Oct 5 06:10:40 localhost dnsmasq[338560]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:10:40 localhost dnsmasq[338560]: warning: no upstream servers configured Oct 5 06:10:40 localhost dnsmasq-dhcp[338560]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:10:40 localhost dnsmasq[338560]: read /var/lib/neutron/dhcp/1fa972f4-9566-4419-ac42-3d10a08ddc16/addn_hosts - 0 addresses Oct 5 06:10:40 localhost dnsmasq-dhcp[338560]: read /var/lib/neutron/dhcp/1fa972f4-9566-4419-ac42-3d10a08ddc16/host Oct 5 06:10:40 localhost dnsmasq-dhcp[338560]: read /var/lib/neutron/dhcp/1fa972f4-9566-4419-ac42-3d10a08ddc16/opts Oct 5 06:10:40 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:40.973 272040 INFO neutron.agent.dhcp.agent [None req-b3f70d7d-9faf-40fe-8152-337a789a4b7e - - - - - -] DHCP configuration for ports {'56fcc23e-bb6c-447d-85d6-aaaa2c3c3698'} is completed#033[00m Oct 5 06:10:41 localhost dnsmasq[336221]: exiting on receipt of SIGTERM Oct 5 06:10:41 localhost systemd[1]: libpod-5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5.scope: Deactivated successfully. 
Oct 5 06:10:41 localhost podman[338578]: 2025-10-05 10:10:41.051546778 +0000 UTC m=+0.055357197 container kill 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:10:41 localhost podman[338592]: 2025-10-05 10:10:41.123236443 +0000 UTC m=+0.057099114 container died 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:10:41 localhost podman[338592]: 2025-10-05 10:10:41.158679415 +0000 UTC m=+0.092542046 container cleanup 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:10:41 
localhost systemd[1]: libpod-conmon-5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5.scope: Deactivated successfully. Oct 5 06:10:41 localhost podman[338599]: 2025-10-05 10:10:41.205640055 +0000 UTC m=+0.126139998 container remove 5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-041cfe8f-6406-4960-bbb2-faeb2bcfb0e5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:10:41 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:41.573 272040 INFO neutron.agent.dhcp.agent [None req-531dd62b-3b6b-4555-a48c-579a661571d2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:10:41 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:41.574 272040 INFO neutron.agent.dhcp.agent [None req-531dd62b-3b6b-4555-a48c-579a661571d2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:10:41 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:41.575 272040 INFO neutron.agent.dhcp.agent [None req-531dd62b-3b6b-4555-a48c-579a661571d2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:10:41 localhost systemd[1]: var-lib-containers-storage-overlay-cc69427e010beafa3544b0e3175c7cf9f78a61b7e81bfba29dbbdb6dbdbe89c0-merged.mount: Deactivated successfully. Oct 5 06:10:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a45b40c21f0f6acfbbd01a757abbf22a905f1b81cafb6fed43582e9781eebb5-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:10:41 localhost systemd[1]: run-netns-qdhcp\x2d041cfe8f\x2d6406\x2d4960\x2dbbb2\x2dfaeb2bcfb0e5.mount: Deactivated successfully. Oct 5 06:10:41 localhost dnsmasq[338560]: read /var/lib/neutron/dhcp/1fa972f4-9566-4419-ac42-3d10a08ddc16/addn_hosts - 0 addresses Oct 5 06:10:41 localhost dnsmasq-dhcp[338560]: read /var/lib/neutron/dhcp/1fa972f4-9566-4419-ac42-3d10a08ddc16/host Oct 5 06:10:41 localhost dnsmasq-dhcp[338560]: read /var/lib/neutron/dhcp/1fa972f4-9566-4419-ac42-3d10a08ddc16/opts Oct 5 06:10:41 localhost podman[338639]: 2025-10-05 10:10:41.771055496 +0000 UTC m=+0.064150704 container kill 1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fa972f4-9566-4419-ac42-3d10a08ddc16, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:10:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e229 do_prune osdmap full prune enabled Oct 5 06:10:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e230 e230: 6 total, 6 up, 6 in Oct 5 06:10:41 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e230: 6 total, 6 up, 6 in Oct 5 06:10:41 localhost ovn_controller[157794]: 2025-10-05T10:10:41Z|00390|binding|INFO|Removing iface tap8be2a294-55 ovn-installed in OVS Oct 5 06:10:41 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:41.996 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 
6a855f7e-3d07-4f5f-bc79-0254b938b1b3 with type ""#033[00m Oct 5 06:10:41 localhost ovn_controller[157794]: 2025-10-05T10:10:41Z|00391|binding|INFO|Removing lport 8be2a294-5567-43f0-a322-2035ca1f5beb ovn-installed in OVS Oct 5 06:10:41 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:41.998 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-1fa972f4-9566-4419-ac42-3d10a08ddc16', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fa972f4-9566-4419-ac42-3d10a08ddc16', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36bd7039b7af4b7db5db22e101f63a40', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8acb3600-14c9-4ded-a4d8-0bda56585adb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8be2a294-5567-43f0-a322-2035ca1f5beb) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:10:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:42.000 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 8be2a294-5567-43f0-a322-2035ca1f5beb in datapath 1fa972f4-9566-4419-ac42-3d10a08ddc16 unbound from our chassis#033[00m Oct 5 06:10:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:42.002 
163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1fa972f4-9566-4419-ac42-3d10a08ddc16, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:10:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:42.016 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[99850453-e94d-44d1-b40b-96a8eacdf60b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:10:42 localhost nova_compute[297021]: 2025-10-05 10:10:42.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:42 localhost nova_compute[297021]: 2025-10-05 10:10:42.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:42.056 272040 INFO neutron.agent.dhcp.agent [None req-d8e73524-8b4c-4df4-bfd0-5ba2fdc9119a - - - - - -] DHCP configuration for ports {'56fcc23e-bb6c-447d-85d6-aaaa2c3c3698', '8be2a294-5567-43f0-a322-2035ca1f5beb'} is completed#033[00m Oct 5 06:10:42 localhost dnsmasq[338560]: exiting on receipt of SIGTERM Oct 5 06:10:42 localhost systemd[1]: libpod-1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8.scope: Deactivated successfully. 
Oct 5 06:10:42 localhost podman[338676]: 2025-10-05 10:10:42.151334346 +0000 UTC m=+0.057566427 container kill 1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fa972f4-9566-4419-ac42-3d10a08ddc16, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 5 06:10:42 localhost podman[338688]: 2025-10-05 10:10:42.220882214 +0000 UTC m=+0.053917119 container died 1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fa972f4-9566-4419-ac42-3d10a08ddc16, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:10:42 localhost podman[338688]: 2025-10-05 10:10:42.258061671 +0000 UTC m=+0.091096536 container cleanup 1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fa972f4-9566-4419-ac42-3d10a08ddc16, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:10:42 
localhost systemd[1]: libpod-conmon-1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8.scope: Deactivated successfully. Oct 5 06:10:42 localhost podman[338690]: 2025-10-05 10:10:42.300787848 +0000 UTC m=+0.128622724 container remove 1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fa972f4-9566-4419-ac42-3d10a08ddc16, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:10:42 localhost nova_compute[297021]: 2025-10-05 10:10:42.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:42 localhost kernel: device tap8be2a294-55 left promiscuous mode Oct 5 06:10:42 localhost nova_compute[297021]: 2025-10-05 10:10:42.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:42.357 272040 INFO neutron.agent.dhcp.agent [None req-618c6529-80fc-43fc-845e-d7ec0223f75c - - - - - -] Synchronizing state#033[00m Oct 5 06:10:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:42.519 272040 INFO neutron.agent.dhcp.agent [None req-00c4e4e2-2c9a-4704-a4b9-992ea2ffa996 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:10:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:42.521 272040 INFO neutron.agent.dhcp.agent [-] Starting network 1fa972f4-9566-4419-ac42-3d10a08ddc16 dhcp configuration#033[00m Oct 5 06:10:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:42.521 
272040 INFO neutron.agent.dhcp.agent [-] Finished network 1fa972f4-9566-4419-ac42-3d10a08ddc16 dhcp configuration#033[00m Oct 5 06:10:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:42.521 272040 INFO neutron.agent.dhcp.agent [None req-00c4e4e2-2c9a-4704-a4b9-992ea2ffa996 - - - - - -] Synchronizing state complete#033[00m Oct 5 06:10:42 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:42.576 272040 INFO neutron.agent.dhcp.agent [None req-92eb4fdd-b4e7-490b-adcb-f6ce28685786 - - - - - -] DHCP configuration for ports {'56fcc23e-bb6c-447d-85d6-aaaa2c3c3698'} is completed#033[00m Oct 5 06:10:42 localhost ovn_controller[157794]: 2025-10-05T10:10:42Z|00392|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:10:42 localhost nova_compute[297021]: 2025-10-05 10:10:42.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:42 localhost systemd[1]: tmp-crun.NK0RRQ.mount: Deactivated successfully. Oct 5 06:10:42 localhost systemd[1]: var-lib-containers-storage-overlay-7ee8a61d6adbd54a7544f9a19857fedb0606eeb7684a52eca06323b24f17c5ef-merged.mount: Deactivated successfully. Oct 5 06:10:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d7f4b6c18382e104b224ee701e8a4ef05a3bf1a26b4712e9a578f102e2984b8-userdata-shm.mount: Deactivated successfully. Oct 5 06:10:42 localhost systemd[1]: run-netns-qdhcp\x2d1fa972f4\x2d9566\x2d4419\x2dac42\x2d3d10a08ddc16.mount: Deactivated successfully. Oct 5 06:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:10:42 localhost systemd[1]: tmp-crun.GFsPfF.mount: Deactivated successfully. 
Oct 5 06:10:42 localhost podman[338720]: 2025-10-05 10:10:42.880246426 +0000 UTC m=+0.091715084 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 5 06:10:42 localhost podman[338720]: 2025-10-05 10:10:42.895751003 +0000 UTC m=+0.107219651 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 06:10:42 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:10:45 localhost podman[338739]: 2025-10-05 10:10:45.703057085 +0000 UTC m=+0.103437618 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 06:10:45 localhost podman[338739]: 2025-10-05 10:10:45.717002689 +0000 UTC m=+0.117383232 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, container_name=openstack_network_exporter, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41) Oct 5 06:10:45 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:10:45 localhost nova_compute[297021]: 2025-10-05 10:10:45.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:10:49 localhost podman[338759]: 2025-10-05 10:10:49.684557212 +0000 UTC m=+0.085649650 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:10:49 localhost podman[338759]: 2025-10-05 10:10:49.697802218 +0000 UTC m=+0.098894666 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , 
managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:10:49 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:10:50 localhost nova_compute[297021]: 2025-10-05 10:10:50.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:10:50 localhost nova_compute[297021]: 2025-10-05 10:10:50.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:10:50 localhost nova_compute[297021]: 2025-10-05 10:10:50.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:10:50 localhost nova_compute[297021]: 2025-10-05 10:10:50.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:10:50 localhost nova_compute[297021]: 2025-10-05 10:10:50.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:50 localhost nova_compute[297021]: 2025-10-05 10:10:50.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:10:51 localhost podman[248506]: time="2025-10-05T10:10:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:10:51 localhost podman[248506]: @ - - [05/Oct/2025:10:10:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147506 "" "Go-http-client/1.1" Oct 5 06:10:51 localhost podman[248506]: @ - - [05/Oct/2025:10:10:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19879 "" "Go-http-client/1.1" Oct 5 06:10:51 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Oct 5 06:10:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:52 localhost openstack_network_exporter[250601]: ERROR 10:10:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:10:52 localhost openstack_network_exporter[250601]: ERROR 10:10:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:10:52 localhost openstack_network_exporter[250601]: ERROR 10:10:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:10:52 localhost openstack_network_exporter[250601]: ERROR 10:10:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:10:52 localhost openstack_network_exporter[250601]: Oct 5 06:10:52 localhost openstack_network_exporter[250601]: ERROR 10:10:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:10:52 localhost openstack_network_exporter[250601]: Oct 5 06:10:52 
localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:10:52 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2522687304' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:10:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:10:52 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2522687304' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:10:53 localhost systemd[1]: tmp-crun.WwmnOR.mount: Deactivated successfully. Oct 5 06:10:53 localhost podman[338798]: 2025-10-05 10:10:53.749541061 +0000 UTC m=+0.067154534 container kill 30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-663e51ad-22e0-4837-9a4a-a66048f1a553, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:10:53 localhost ovn_controller[157794]: 2025-10-05T10:10:53Z|00393|binding|INFO|Removing iface tapd715c47f-f3 ovn-installed in OVS Oct 5 06:10:53 localhost dnsmasq[338051]: exiting on receipt of SIGTERM Oct 5 06:10:53 localhost ovn_controller[157794]: 2025-10-05T10:10:53Z|00394|binding|INFO|Removing lport d715c47f-f314-4023-8321-8d33e8ca4232 ovn-installed in OVS Oct 5 06:10:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:53.757 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 
480cbdb0-ba6d-4fc4-982e-1e00c0546450 with type ""#033[00m Oct 5 06:10:53 localhost nova_compute[297021]: 2025-10-05 10:10:53.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:10:53 localhost systemd[1]: libpod-30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac.scope: Deactivated successfully. Oct 5 06:10:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:53.764 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.243/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-663e51ad-22e0-4837-9a4a-a66048f1a553', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-663e51ad-22e0-4837-9a4a-a66048f1a553', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36bd7039b7af4b7db5db22e101f63a40', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e92452c-114f-4a4c-855e-edd5381e9fad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d715c47f-f314-4023-8321-8d33e8ca4232) old= matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:10:53 localhost nova_compute[297021]: 2025-10-05 10:10:53.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:53.769 163434 INFO neutron.agent.ovn.metadata.agent [-] Port d715c47f-f314-4023-8321-8d33e8ca4232 in datapath 663e51ad-22e0-4837-9a4a-a66048f1a553 unbound from our chassis#033[00m Oct 5 06:10:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:53.772 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 663e51ad-22e0-4837-9a4a-a66048f1a553, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:10:53 localhost ovn_metadata_agent[163429]: 2025-10-05 10:10:53.773 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee63221-ea6f-4ff0-b503-7ac69df3f3e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:10:53 localhost podman[338811]: 2025-10-05 10:10:53.845697842 +0000 UTC m=+0.072138177 container died 30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-663e51ad-22e0-4837-9a4a-a66048f1a553, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:10:53 localhost podman[338811]: 2025-10-05 10:10:53.884558295 +0000 UTC m=+0.110998580 container cleanup 30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-663e51ad-22e0-4837-9a4a-a66048f1a553, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:10:53 localhost systemd[1]: libpod-conmon-30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac.scope: Deactivated successfully. Oct 5 06:10:53 localhost podman[338813]: 2025-10-05 10:10:53.924724215 +0000 UTC m=+0.144099921 container remove 30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-663e51ad-22e0-4837-9a4a-a66048f1a553, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:10:53 localhost nova_compute[297021]: 2025-10-05 10:10:53.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:53 localhost kernel: device tapd715c47f-f3 left promiscuous mode Oct 5 06:10:53 localhost nova_compute[297021]: 2025-10-05 10:10:53.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:53 localhost podman[338812]: 2025-10-05 10:10:53.979795602 +0000 UTC m=+0.202107307 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:10:53 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:53.987 272040 INFO neutron.agent.dhcp.agent [None req-98a0ddd0-810e-4699-bd89-126bc617f39f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:10:53 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:10:53.987 272040 INFO neutron.agent.dhcp.agent [None req-98a0ddd0-810e-4699-bd89-126bc617f39f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:10:53 localhost podman[338812]: 2025-10-05 10:10:53.99089506 +0000 UTC m=+0.213206765 container exec_died 
fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:10:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:10:53 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3291349473' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:10:53 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:10:53 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3291349473' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:10:54 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:10:54 localhost ovn_controller[157794]: 2025-10-05T10:10:54Z|00395|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:10:54 localhost nova_compute[297021]: 2025-10-05 10:10:54.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:54 localhost systemd[1]: var-lib-containers-storage-overlay-930d18c74ab1232690e19d593910aa840d9444c78a7c8ec6b6a1bed8d8bf51dd-merged.mount: Deactivated successfully. Oct 5 06:10:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30e742ebd03bf56272111a005944b902804626b73ae6f5cb4b5535dca567a6ac-userdata-shm.mount: Deactivated successfully. Oct 5 06:10:54 localhost systemd[1]: run-netns-qdhcp\x2d663e51ad\x2d22e0\x2d4837\x2d9a4a\x2da66048f1a553.mount: Deactivated successfully. 
Oct 5 06:10:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e230 do_prune osdmap full prune enabled Oct 5 06:10:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e231 e231: 6 total, 6 up, 6 in Oct 5 06:10:55 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e231: 6 total, 6 up, 6 in Oct 5 06:10:55 localhost nova_compute[297021]: 2025-10-05 10:10:55.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:11:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 06:11:00 localhost podman[338866]: 2025-10-05 10:11:00.688173813 +0000 UTC m=+0.096284016 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Oct 5 06:11:00 localhost podman[338866]: 2025-10-05 10:11:00.702897978 +0000 UTC m=+0.111008141 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid) Oct 5 06:11:00 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:11:00 localhost podman[338867]: 2025-10-05 10:11:00.779867345 +0000 UTC m=+0.184230638 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3) Oct 5 06:11:00 localhost podman[338867]: 2025-10-05 10:11:00.821929764 +0000 UTC m=+0.226293057 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:11:00 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:11:00 localhost nova_compute[297021]: 2025-10-05 10:11:00.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:00 localhost nova_compute[297021]: 2025-10-05 10:11:00.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:00 localhost nova_compute[297021]: 2025-10-05 10:11:00.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:11:00 localhost nova_compute[297021]: 2025-10-05 10:11:00.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:00 localhost nova_compute[297021]: 2025-10-05 10:11:00.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:00 localhost nova_compute[297021]: 2025-10-05 10:11:00.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:01 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:01.792 272040 INFO neutron.agent.linux.ip_lib [None req-9fdd85ec-861f-4378-b299-6ea8f77a87d5 - - - - - -] Device tap88d11f4d-8d cannot be used as it has no MAC address#033[00m Oct 5 06:11:01 localhost nova_compute[297021]: 2025-10-05 10:11:01.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:01 localhost kernel: device tap88d11f4d-8d entered promiscuous mode Oct 5 06:11:01 localhost NetworkManager[5981]: [1759659061.8320] manager: (tap88d11f4d-8d): new Generic device (/org/freedesktop/NetworkManager/Devices/62) Oct 5 06:11:01 
localhost nova_compute[297021]: 2025-10-05 10:11:01.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:01 localhost ovn_controller[157794]: 2025-10-05T10:11:01Z|00396|binding|INFO|Claiming lport 88d11f4d-8d42-4618-8332-3a3366f3b665 for this chassis. Oct 5 06:11:01 localhost ovn_controller[157794]: 2025-10-05T10:11:01Z|00397|binding|INFO|88d11f4d-8d42-4618-8332-3a3366f3b665: Claiming unknown Oct 5 06:11:01 localhost systemd-udevd[338912]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:11:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:01.853 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-cd0ea47a-53ae-418e-8451-447d9332d71e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd0ea47a-53ae-418e-8451-447d9332d71e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96f9484619534895a7ead96d38119886', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fde5f049-05be-4e9b-b732-c5b9f1a47844, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=88d11f4d-8d42-4618-8332-3a3366f3b665) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:11:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:01.856 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 88d11f4d-8d42-4618-8332-3a3366f3b665 in datapath cd0ea47a-53ae-418e-8451-447d9332d71e bound to our chassis#033[00m Oct 5 06:11:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:01.858 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network cd0ea47a-53ae-418e-8451-447d9332d71e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:11:01 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:01.859 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[c16797a4-0061-44df-a867-2fef5e3d2c5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:11:01 localhost journal[237931]: ethtool ioctl error on tap88d11f4d-8d: No such device Oct 5 06:11:01 localhost journal[237931]: ethtool ioctl error on tap88d11f4d-8d: No such device Oct 5 06:11:01 localhost nova_compute[297021]: 2025-10-05 10:11:01.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:01 localhost journal[237931]: ethtool ioctl error on tap88d11f4d-8d: No such device Oct 5 06:11:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e231 do_prune osdmap full prune enabled Oct 5 06:11:01 localhost ovn_controller[157794]: 2025-10-05T10:11:01Z|00398|binding|INFO|Setting lport 88d11f4d-8d42-4618-8332-3a3366f3b665 ovn-installed in OVS Oct 5 06:11:01 localhost ovn_controller[157794]: 
2025-10-05T10:11:01Z|00399|binding|INFO|Setting lport 88d11f4d-8d42-4618-8332-3a3366f3b665 up in Southbound Oct 5 06:11:01 localhost journal[237931]: ethtool ioctl error on tap88d11f4d-8d: No such device Oct 5 06:11:01 localhost nova_compute[297021]: 2025-10-05 10:11:01.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:01 localhost nova_compute[297021]: 2025-10-05 10:11:01.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:01 localhost journal[237931]: ethtool ioctl error on tap88d11f4d-8d: No such device Oct 5 06:11:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e232 e232: 6 total, 6 up, 6 in Oct 5 06:11:01 localhost journal[237931]: ethtool ioctl error on tap88d11f4d-8d: No such device Oct 5 06:11:01 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e232: 6 total, 6 up, 6 in Oct 5 06:11:01 localhost journal[237931]: ethtool ioctl error on tap88d11f4d-8d: No such device Oct 5 06:11:01 localhost journal[237931]: ethtool ioctl error on tap88d11f4d-8d: No such device Oct 5 06:11:01 localhost nova_compute[297021]: 2025-10-05 10:11:01.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:01 localhost nova_compute[297021]: 2025-10-05 10:11:01.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:02 localhost podman[338983]: Oct 5 06:11:02 localhost podman[338983]: 2025-10-05 10:11:02.827758798 +0000 UTC m=+0.091189000 container create 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, 
tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:11:02 localhost systemd[1]: Started libpod-conmon-3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148.scope. Oct 5 06:11:02 localhost podman[338983]: 2025-10-05 10:11:02.784816874 +0000 UTC m=+0.048247106 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:11:02 localhost systemd[1]: tmp-crun.G0kEom.mount: Deactivated successfully. Oct 5 06:11:02 localhost systemd[1]: Started libcrun container. Oct 5 06:11:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8c77fbb10347141de53f5be82a1ff81b7a504737dfaa33ad3bbb1f433d3479/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:11:02 localhost podman[338983]: 2025-10-05 10:11:02.924275939 +0000 UTC m=+0.187706151 container init 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:11:02 localhost podman[338983]: 2025-10-05 10:11:02.935565992 +0000 UTC m=+0.198996194 container start 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:11:02 localhost dnsmasq[339002]: started, version 2.85 cachesize 150 Oct 5 06:11:02 localhost dnsmasq[339002]: DNS service limited to local subnets Oct 5 06:11:02 localhost dnsmasq[339002]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:11:02 localhost dnsmasq[339002]: warning: no upstream servers configured Oct 5 06:11:02 localhost dnsmasq-dhcp[339002]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:11:02 localhost dnsmasq[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/addn_hosts - 0 addresses Oct 5 06:11:02 localhost dnsmasq-dhcp[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/host Oct 5 06:11:02 localhost dnsmasq-dhcp[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/opts Oct 5 06:11:03 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:03.145 272040 INFO neutron.agent.dhcp.agent [None req-0b12d3c0-cf09-4cf0-aaa6-042f17b43c1b - - - - - -] DHCP configuration for ports {'c194bd36-7e85-4a7c-81ce-b9f1a1dfff19'} is completed#033[00m Oct 5 06:11:04 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": 
"json"} v 0) Oct 5 06:11:04 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:04 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:05 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:05 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:05 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw 
pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:05 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:05 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:05 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:05 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:06 localhost nova_compute[297021]: 2025-10-05 10:11:06.026 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:06 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Oct 5 06:11:06 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:06 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:06 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:11:07 localhost podman[339003]: 2025-10-05 10:11:07.68544468 +0000 UTC m=+0.087467520 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:11:07 localhost podman[339003]: 2025-10-05 10:11:07.691251835 +0000 UTC 
m=+0.093274665 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:11:07 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:11:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:07 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:07 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:08 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:08 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:08 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:08 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:09 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", 
"caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:09 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:09 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Oct 5 06:11:09 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:09 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:09 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:11:10 localhost podman[339021]: 2025-10-05 10:11:10.688788326 +0000 UTC m=+0.090833471 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:11:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:10.744 272040 INFO neutron.agent.linux.ip_lib [None 
req-7bbbd964-f371-463e-8b46-04da5339cec4 - - - - - -] Device tap24ee3b1e-9e cannot be used as it has no MAC address#033[00m Oct 5 06:11:10 localhost podman[339021]: 2025-10-05 10:11:10.755896007 +0000 UTC m=+0.157941122 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3) Oct 5 06:11:10 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:11:10 localhost nova_compute[297021]: 2025-10-05 10:11:10.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:10 localhost kernel: device tap24ee3b1e-9e entered promiscuous mode Oct 5 06:11:10 localhost NetworkManager[5981]: [1759659070.7855] manager: (tap24ee3b1e-9e): new Generic device (/org/freedesktop/NetworkManager/Devices/63) Oct 5 06:11:10 localhost ovn_controller[157794]: 2025-10-05T10:11:10Z|00400|binding|INFO|Claiming lport 24ee3b1e-9e60-434f-9428-ec34159ea056 for this chassis. Oct 5 06:11:10 localhost ovn_controller[157794]: 2025-10-05T10:11:10Z|00401|binding|INFO|24ee3b1e-9e60-434f-9428-ec34159ea056: Claiming unknown Oct 5 06:11:10 localhost nova_compute[297021]: 2025-10-05 10:11:10.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:10 localhost systemd-udevd[339056]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:11:10 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:10.793 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:11:10Z, description=, device_id=d6fe07db-ad41-4208-a06c-26ac9533f19d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fa72c957-ea8c-40a3-b996-de17b7baf07c, ip_allocation=immediate, mac_address=fa:16:3e:0c:dc:e5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:11:00Z, description=, dns_domain=, id=cd0ea47a-53ae-418e-8451-447d9332d71e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-1460843989, port_security_enabled=True, project_id=96f9484619534895a7ead96d38119886, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17836, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3079, status=ACTIVE, subnets=['21aa8e87-752e-4c10-a680-c0996e9dee24'], tags=[], tenant_id=96f9484619534895a7ead96d38119886, updated_at=2025-10-05T10:11:01Z, vlan_transparent=None, network_id=cd0ea47a-53ae-418e-8451-447d9332d71e, port_security_enabled=False, project_id=96f9484619534895a7ead96d38119886, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3123, status=DOWN, tags=[], tenant_id=96f9484619534895a7ead96d38119886, updated_at=2025-10-05T10:11:10Z on network cd0ea47a-53ae-418e-8451-447d9332d71e#033[00m Oct 5 06:11:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:10.799 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), 
table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-ca7ee42f-9f55-45fd-9b2f-9992b734b00d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca7ee42f-9f55-45fd-9b2f-9992b734b00d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96f9484619534895a7ead96d38119886', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a44e341-ad1d-47da-8e38-02bdb3f2eab4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=24ee3b1e-9e60-434f-9428-ec34159ea056) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:11:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:10.800 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 24ee3b1e-9e60-434f-9428-ec34159ea056 in datapath ca7ee42f-9f55-45fd-9b2f-9992b734b00d bound to our chassis#033[00m Oct 5 06:11:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:10.801 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Port a4fb8407-8950-4a9b-b978-49c3adcaa2b8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:11:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:10.801 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for 
network ca7ee42f-9f55-45fd-9b2f-9992b734b00d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:11:10 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:10.803 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[73c47514-a647-4cf0-a7f5-aa617e2001b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:11:10 localhost ovn_controller[157794]: 2025-10-05T10:11:10Z|00402|binding|INFO|Setting lport 24ee3b1e-9e60-434f-9428-ec34159ea056 ovn-installed in OVS Oct 5 06:11:10 localhost ovn_controller[157794]: 2025-10-05T10:11:10Z|00403|binding|INFO|Setting lport 24ee3b1e-9e60-434f-9428-ec34159ea056 up in Southbound Oct 5 06:11:10 localhost nova_compute[297021]: 2025-10-05 10:11:10.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:10 localhost nova_compute[297021]: 2025-10-05 10:11:10.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:10 localhost nova_compute[297021]: 2025-10-05 10:11:10.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:10 localhost nova_compute[297021]: 2025-10-05 10:11:10.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:11 localhost nova_compute[297021]: 2025-10-05 10:11:11.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:11 localhost dnsmasq[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/addn_hosts - 1 addresses Oct 5 06:11:11 localhost dnsmasq-dhcp[339002]: read 
/var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/host Oct 5 06:11:11 localhost dnsmasq-dhcp[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/opts Oct 5 06:11:11 localhost podman[339085]: 2025-10-05 10:11:11.062365605 +0000 UTC m=+0.077791730 container kill 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:11:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:11.425 272040 INFO neutron.agent.dhcp.agent [None req-9637be0c-c6f9-4b04-b491-942350d43852 - - - - - -] DHCP configuration for ports {'fa72c957-ea8c-40a3-b996-de17b7baf07c'} is completed#033[00m Oct 5 06:11:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:11.487 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:11:10Z, description=, device_id=d6fe07db-ad41-4208-a06c-26ac9533f19d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fa72c957-ea8c-40a3-b996-de17b7baf07c, ip_allocation=immediate, mac_address=fa:16:3e:0c:dc:e5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:11:00Z, description=, dns_domain=, id=cd0ea47a-53ae-418e-8451-447d9332d71e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-RoutersNegativeTest-test-network-1460843989, port_security_enabled=True, project_id=96f9484619534895a7ead96d38119886, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17836, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3079, status=ACTIVE, subnets=['21aa8e87-752e-4c10-a680-c0996e9dee24'], tags=[], tenant_id=96f9484619534895a7ead96d38119886, updated_at=2025-10-05T10:11:01Z, vlan_transparent=None, network_id=cd0ea47a-53ae-418e-8451-447d9332d71e, port_security_enabled=False, project_id=96f9484619534895a7ead96d38119886, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3123, status=DOWN, tags=[], tenant_id=96f9484619534895a7ead96d38119886, updated_at=2025-10-05T10:11:10Z on network cd0ea47a-53ae-418e-8451-447d9332d71e#033[00m Oct 5 06:11:11 localhost systemd[1]: tmp-crun.uHdTHe.mount: Deactivated successfully. Oct 5 06:11:11 localhost dnsmasq[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/addn_hosts - 1 addresses Oct 5 06:11:11 localhost dnsmasq-dhcp[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/host Oct 5 06:11:11 localhost podman[339143]: 2025-10-05 10:11:11.725160851 +0000 UTC m=+0.086030471 container kill 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:11:11 localhost dnsmasq-dhcp[339002]: read 
/var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/opts Oct 5 06:11:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:11 localhost podman[339183]: Oct 5 06:11:11 localhost podman[339183]: 2025-10-05 10:11:11.951270842 +0000 UTC m=+0.092309530 container create 9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca7ee42f-9f55-45fd-9b2f-9992b734b00d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 06:11:11 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:11.974 272040 INFO neutron.agent.dhcp.agent [None req-c70556bd-7f38-4e88-bd67-e8b479c9ba3d - - - - - -] DHCP configuration for ports {'fa72c957-ea8c-40a3-b996-de17b7baf07c'} is completed#033[00m Oct 5 06:11:11 localhost systemd[1]: Started libpod-conmon-9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0.scope. Oct 5 06:11:12 localhost podman[339183]: 2025-10-05 10:11:11.907974079 +0000 UTC m=+0.049012757 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:11:12 localhost systemd[1]: Started libcrun container. 
Oct 5 06:11:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05039b9d08a2cdd89329f1768cdfe5114079d3d2c3d44c8c7c876a08e32ec77c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:11:12 localhost podman[339183]: 2025-10-05 10:11:12.024570759 +0000 UTC m=+0.165609437 container init 9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca7ee42f-9f55-45fd-9b2f-9992b734b00d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:11:12 localhost podman[339183]: 2025-10-05 10:11:12.033411837 +0000 UTC m=+0.174450525 container start 9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca7ee42f-9f55-45fd-9b2f-9992b734b00d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:11:12 localhost dnsmasq[339205]: started, version 2.85 cachesize 150 Oct 5 06:11:12 localhost dnsmasq[339205]: DNS service limited to local subnets Oct 5 06:11:12 localhost dnsmasq[339205]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:11:12 localhost dnsmasq[339205]: warning: no upstream servers configured Oct 
5 06:11:12 localhost dnsmasq-dhcp[339205]: DHCP, static leases only on 10.101.0.0, lease time 1d Oct 5 06:11:12 localhost dnsmasq[339205]: read /var/lib/neutron/dhcp/ca7ee42f-9f55-45fd-9b2f-9992b734b00d/addn_hosts - 0 addresses Oct 5 06:11:12 localhost dnsmasq-dhcp[339205]: read /var/lib/neutron/dhcp/ca7ee42f-9f55-45fd-9b2f-9992b734b00d/host Oct 5 06:11:12 localhost dnsmasq-dhcp[339205]: read /var/lib/neutron/dhcp/ca7ee42f-9f55-45fd-9b2f-9992b734b00d/opts Oct 5 06:11:12 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:12.224 272040 INFO neutron.agent.dhcp.agent [None req-d1c4c3f0-82e9-4704-8236-7d8a722f086d - - - - - -] DHCP configuration for ports {'99614054-83ed-4927-ae35-70d184c99bc1'} is completed#033[00m Oct 5 06:11:12 localhost dnsmasq[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/addn_hosts - 0 addresses Oct 5 06:11:12 localhost dnsmasq-dhcp[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/host Oct 5 06:11:12 localhost podman[339223]: 2025-10-05 10:11:12.512475459 +0000 UTC m=+0.065162521 container kill 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 06:11:12 localhost dnsmasq-dhcp[339002]: read /var/lib/neutron/dhcp/cd0ea47a-53ae-418e-8451-447d9332d71e/opts Oct 5 06:11:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) Oct 5 06:11:12 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Oct 5 06:11:12 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Oct 5 06:11:12 localhost ovn_controller[157794]: 2025-10-05T10:11:12Z|00404|binding|INFO|Releasing lport 88d11f4d-8d42-4618-8332-3a3366f3b665 from this chassis (sb_readonly=0) Oct 5 06:11:12 localhost nova_compute[297021]: 2025-10-05 10:11:12.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:12 localhost kernel: device tap88d11f4d-8d left promiscuous mode Oct 5 06:11:12 localhost ovn_controller[157794]: 2025-10-05T10:11:12Z|00405|binding|INFO|Setting lport 88d11f4d-8d42-4618-8332-3a3366f3b665 down in Southbound Oct 5 06:11:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:12.798 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-cd0ea47a-53ae-418e-8451-447d9332d71e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd0ea47a-53ae-418e-8451-447d9332d71e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96f9484619534895a7ead96d38119886', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 
'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fde5f049-05be-4e9b-b732-c5b9f1a47844, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=88d11f4d-8d42-4618-8332-3a3366f3b665) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:11:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:12.800 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 88d11f4d-8d42-4618-8332-3a3366f3b665 in datapath cd0ea47a-53ae-418e-8451-447d9332d71e unbound from our chassis#033[00m Oct 5 06:11:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:12.802 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd0ea47a-53ae-418e-8451-447d9332d71e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:11:12 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:12.803 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[f922c8ed-86bc-4ede-a21d-9633f80490d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:11:12 localhost nova_compute[297021]: 2025-10-05 10:11:12.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:13 localhost dnsmasq[339205]: exiting on receipt of SIGTERM Oct 5 06:11:13 localhost podman[339261]: 2025-10-05 10:11:13.353661983 +0000 UTC m=+0.070830833 container kill 9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca7ee42f-9f55-45fd-9b2f-9992b734b00d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:11:13 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Oct 5 06:11:13 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Oct 5 06:11:13 localhost ceph-mon[308154]: from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Oct 5 06:11:13 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Oct 5 06:11:13 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:13 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:13 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:13 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:11:13 localhost systemd[1]: libpod-9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0.scope: Deactivated successfully. Oct 5 06:11:13 localhost ovn_controller[157794]: 2025-10-05T10:11:13Z|00406|binding|INFO|Removing iface tap24ee3b1e-9e ovn-installed in OVS Oct 5 06:11:13 localhost ovn_controller[157794]: 2025-10-05T10:11:13Z|00407|binding|INFO|Removing lport 24ee3b1e-9e60-434f-9428-ec34159ea056 ovn-installed in OVS Oct 5 06:11:13 localhost nova_compute[297021]: 2025-10-05 10:11:13.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:13 localhost nova_compute[297021]: 2025-10-05 10:11:13.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:13 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:13.404 163434 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a4fb8407-8950-4a9b-b978-49c3adcaa2b8 with type ""#033[00m Oct 5 06:11:13 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:13.413 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-ca7ee42f-9f55-45fd-9b2f-9992b734b00d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca7ee42f-9f55-45fd-9b2f-9992b734b00d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96f9484619534895a7ead96d38119886', 'neutron:revision_number': '1', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a44e341-ad1d-47da-8e38-02bdb3f2eab4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=24ee3b1e-9e60-434f-9428-ec34159ea056) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:11:13 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:13.415 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 24ee3b1e-9e60-434f-9428-ec34159ea056 in datapath ca7ee42f-9f55-45fd-9b2f-9992b734b00d unbound from our chassis#033[00m Oct 5 06:11:13 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:13.416 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca7ee42f-9f55-45fd-9b2f-9992b734b00d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:11:13 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:13.417 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[c41a8776-12c5-48ff-9410-4b62f82fbcb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:11:13 localhost podman[339275]: 2025-10-05 10:11:13.470550501 +0000 UTC m=+0.089646117 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Oct 5 06:11:13 localhost podman[339273]: 2025-10-05 10:11:13.489760697 +0000 UTC m=+0.118245475 container died 9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca7ee42f-9f55-45fd-9b2f-9992b734b00d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:11:13 localhost podman[339273]: 2025-10-05 10:11:13.527705516 +0000 UTC m=+0.156190244 container cleanup 
9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca7ee42f-9f55-45fd-9b2f-9992b734b00d, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:11:13 localhost systemd[1]: libpod-conmon-9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0.scope: Deactivated successfully. Oct 5 06:11:13 localhost podman[339275]: 2025-10-05 10:11:13.537817617 +0000 UTC m=+0.156913253 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:11:13 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:11:13 localhost podman[339280]: 2025-10-05 10:11:13.626321724 +0000 UTC m=+0.240420896 container remove 9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca7ee42f-9f55-45fd-9b2f-9992b734b00d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:11:13 localhost nova_compute[297021]: 2025-10-05 10:11:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:13 localhost kernel: device tap24ee3b1e-9e left promiscuous mode Oct 5 06:11:13 localhost nova_compute[297021]: 2025-10-05 10:11:13.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:13 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:13.675 272040 INFO neutron.agent.dhcp.agent [None 
req-6342a451-42c9-4e89-8727-1a8e9c7f327b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:11:13 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:13.676 272040 INFO neutron.agent.dhcp.agent [None req-6342a451-42c9-4e89-8727-1a8e9c7f327b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:11:13 localhost systemd[1]: var-lib-containers-storage-overlay-05039b9d08a2cdd89329f1768cdfe5114079d3d2c3d44c8c7c876a08e32ec77c-merged.mount: Deactivated successfully. Oct 5 06:11:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e2e925717e5f7c6562ff6d5fb4dc8c917ff0bb22e5612d1759b70dd025e02d0-userdata-shm.mount: Deactivated successfully. Oct 5 06:11:13 localhost systemd[1]: run-netns-qdhcp\x2dca7ee42f\x2d9f55\x2d45fd\x2d9b2f\x2d9992b734b00d.mount: Deactivated successfully. Oct 5 06:11:13 localhost ovn_controller[157794]: 2025-10-05T10:11:13Z|00408|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:11:13 localhost nova_compute[297021]: 2025-10-05 10:11:13.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:14 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:14 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:14 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:14 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", 
"entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:15 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:16 localhost nova_compute[297021]: 2025-10-05 10:11:16.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:16 localhost dnsmasq[339002]: exiting on receipt of SIGTERM Oct 5 06:11:16 localhost podman[339333]: 2025-10-05 10:11:16.327654582 
+0000 UTC m=+0.074440720 container kill 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:11:16 localhost systemd[1]: libpod-3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148.scope: Deactivated successfully. Oct 5 06:11:16 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Oct 5 06:11:16 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:16 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:16 localhost ceph-mon[308154]: 
from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:16 localhost podman[339347]: 2025-10-05 10:11:16.411216126 +0000 UTC m=+0.066250970 container died 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:11:16 localhost nova_compute[297021]: 2025-10-05 10:11:16.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:16 localhost nova_compute[297021]: 2025-10-05 10:11:16.423 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:16 localhost nova_compute[297021]: 2025-10-05 10:11:16.423 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:16 localhost nova_compute[297021]: 2025-10-05 10:11:16.424 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:11:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148-userdata-shm.mount: Deactivated successfully. Oct 5 06:11:16 localhost podman[339347]: 2025-10-05 10:11:16.445434744 +0000 UTC m=+0.100469538 container cleanup 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:11:16 localhost systemd[1]: libpod-conmon-3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148.scope: Deactivated successfully. Oct 5 06:11:16 localhost podman[339355]: 2025-10-05 10:11:16.493672269 +0000 UTC m=+0.135932550 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.) Oct 5 06:11:16 localhost podman[339355]: 2025-10-05 10:11:16.536807017 +0000 UTC m=+0.179067278 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9) Oct 5 06:11:16 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:11:16 localhost podman[339349]: 2025-10-05 10:11:16.591964708 +0000 UTC m=+0.241001192 container remove 3ffa0ddd381627a9c1049ecc0b836552456dfbc810b95b16a9b0f210044db148 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cd0ea47a-53ae-418e-8451-447d9332d71e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:11:16 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:16.627 272040 INFO neutron.agent.dhcp.agent [None req-3a0bc637-c9bb-437e-b96a-1bfada0ccf50 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:11:16 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:11:16.629 272040 INFO neutron.agent.dhcp.agent [None req-3a0bc637-c9bb-437e-b96a-1bfada0ccf50 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 
06:11:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:11:16 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:16 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:11:17 localhost systemd[1]: var-lib-containers-storage-overlay-8a8c77fbb10347141de53f5be82a1ff81b7a504737dfaa33ad3bbb1f433d3479-merged.mount: Deactivated successfully. Oct 5 06:11:17 localhost systemd[1]: run-netns-qdhcp\x2dcd0ea47a\x2d53ae\x2d418e\x2d8451\x2d447d9332d71e.mount: Deactivated successfully. 
Oct 5 06:11:17 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:17 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:17 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:17 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:11:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:17 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:17 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:18 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:18 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:18 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:18 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' 
cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:19 localhost nova_compute[297021]: 2025-10-05 10:11:19.418 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:19 localhost nova_compute[297021]: 2025-10-05 10:11:19.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) Oct 5 06:11:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Oct 5 06:11:19 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Oct 5 06:11:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:20 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", 
"osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:20 localhost nova_compute[297021]: 2025-10-05 10:11:20.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:20 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Oct 5 06:11:20 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Oct 5 06:11:20 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Oct 5 06:11:20 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", 
"caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:20.472 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:11:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:20.473 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:11:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:20.473 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:11:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:11:20 localhost podman[339395]: 2025-10-05 10:11:20.6790202 +0000 UTC m=+0.082939428 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:11:20 localhost podman[339395]: 2025-10-05 10:11:20.712204531 +0000 UTC m=+0.116123739 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:11:20 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:11:21 localhost nova_compute[297021]: 2025-10-05 10:11:21.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:21 localhost nova_compute[297021]: 2025-10-05 10:11:21.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:21 localhost nova_compute[297021]: 2025-10-05 10:11:21.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:11:21 localhost nova_compute[297021]: 2025-10-05 10:11:21.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:21 localhost nova_compute[297021]: 2025-10-05 10:11:21.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:21 localhost nova_compute[297021]: 2025-10-05 10:11:21.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:21 localhost podman[248506]: time="2025-10-05T10:11:21Z" level=info msg="List containers: received 
`last` parameter - overwriting `limit`" Oct 5 06:11:21 localhost podman[248506]: @ - - [05/Oct/2025:10:11:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:11:21 localhost podman[248506]: @ - - [05/Oct/2025:10:11:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19396 "" "Go-http-client/1.1" Oct 5 06:11:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:22 localhost openstack_network_exporter[250601]: ERROR 10:11:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:11:22 localhost openstack_network_exporter[250601]: ERROR 10:11:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:11:22 localhost openstack_network_exporter[250601]: ERROR 10:11:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:11:22 localhost openstack_network_exporter[250601]: ERROR 10:11:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:11:22 localhost openstack_network_exporter[250601]: Oct 5 06:11:22 localhost openstack_network_exporter[250601]: ERROR 10:11:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:11:22 localhost openstack_network_exporter[250601]: Oct 5 06:11:23 localhost nova_compute[297021]: 2025-10-05 10:11:23.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) 
e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:11:23 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:23 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:11:23 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:23 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:23 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:23 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:11:23 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) Oct 5 06:11:23 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Oct 5 06:11:23 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Oct 5 06:11:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:24 localhost nova_compute[297021]: 2025-10-05 10:11:24.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:24 localhost nova_compute[297021]: 2025-10-05 10:11:24.448 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:11:24 localhost nova_compute[297021]: 2025-10-05 10:11:24.449 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:11:24 localhost nova_compute[297021]: 2025-10-05 10:11:24.449 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:11:24 localhost nova_compute[297021]: 2025-10-05 10:11:24.449 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:11:24 localhost nova_compute[297021]: 2025-10-05 10:11:24.450 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:11:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:11:24 localhost systemd[1]: tmp-crun.mAjbNO.mount: Deactivated successfully. 
Oct 5 06:11:24 localhost podman[339429]: 2025-10-05 10:11:24.698283282 +0000 UTC m=+0.098625528 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:11:24 localhost podman[339429]: 2025-10-05 10:11:24.705463885 +0000 UTC m=+0.105806161 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:11:24 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:11:24 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Oct 5 06:11:24 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Oct 5 06:11:24 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Oct 5 06:11:24 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Oct 5 06:11:24 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:24 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:24 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:24 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": 
"client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:11:24 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2134894989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:11:24 localhost nova_compute[297021]: 2025-10-05 10:11:24.939 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.049 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.049 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.272 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.274 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11090MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.275 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.276 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.401 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.401 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.402 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.468 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:11:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e232 do_prune osdmap full prune enabled Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0. 
Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.832863) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64 Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659085832934, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1859, "num_deletes": 268, "total_data_size": 1767720, "memory_usage": 1801120, "flush_reason": "Manual Compaction"} Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started Oct 5 06:11:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e233 e233: 6 total, 6 up, 6 in Oct 5 06:11:25 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e233: 6 total, 6 up, 6 in Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659085847616, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1732544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33763, "largest_seqno": 35621, "table_properties": {"data_size": 1724102, "index_size": 4951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 22012, "raw_average_key_size": 22, "raw_value_size": 1705770, "raw_average_value_size": 1767, "num_data_blocks": 208, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659006, "oldest_key_time": 1759659006, "file_creation_time": 1759659085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}} Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 14852 microseconds, and 5926 cpu microseconds. Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.847717) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1732544 bytes OK Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.847745) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.850137) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.850161) EVENT_LOG_v1 {"time_micros": 1759659085850154, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.850185) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max 
bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1758891, prev total WAL file size 1758932, number of live WAL files 2. Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.851032) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end) Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1691KB)], [63(15MB)] Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659085851132, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 18489258, "oldest_snapshot_seqno": -1} Oct 5 06:11:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:11:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2385016833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.928 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.935 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.963 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.965 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:11:25 localhost nova_compute[297021]: 2025-10-05 10:11:25.966 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 13555 keys, 17192920 bytes, temperature: kUnknown Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659085994609, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 17192920, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17115148, "index_size": 42791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33925, "raw_key_size": 365259, "raw_average_key_size": 26, "raw_value_size": 16884092, "raw_average_value_size": 1245, "num_data_blocks": 1587, "num_entries": 13555, "num_filter_entries": 13555, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759659085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}} Oct 5 
06:11:25 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.995105) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 17192920 bytes Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.997349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.7 rd, 119.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 16.0 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(20.6) write-amplify(9.9) OK, records in: 14108, records dropped: 553 output_compression: NoCompression Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.997380) EVENT_LOG_v1 {"time_micros": 1759659085997366, "job": 38, "event": "compaction_finished", "compaction_time_micros": 143637, "compaction_time_cpu_micros": 51956, "output_level": 6, "num_output_files": 1, "total_output_size": 17192920, "num_input_records": 14108, "num_output_records": 13555, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:11:25 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659085997826, "job": 38, "event": "table_file_deletion", "file_number": 65} Oct 5 06:11:26 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000063.sst immediately, 
rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:11:26 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659086000244, "job": 38, "event": "table_file_deletion", "file_number": 63} Oct 5 06:11:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:25.850875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:11:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:26.000346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:11:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:26.000354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:11:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:26.000357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:11:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:26.000361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:11:26 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:11:26.000364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:11:26 localhost nova_compute[297021]: 2025-10-05 10:11:26.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:26 localhost nova_compute[297021]: 2025-10-05 10:11:26.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:26 localhost nova_compute[297021]: 2025-10-05 10:11:26.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:11:26 localhost 
nova_compute[297021]: 2025-10-05 10:11:26.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:26 localhost nova_compute[297021]: 2025-10-05 10:11:26.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:26 localhost nova_compute[297021]: 2025-10-05 10:11:26.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:26 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:26 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:27 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:27 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:27 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:27 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:27 localhost ceph-mon[308154]: from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:27 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:27 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:27 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:27 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e233 do_prune osdmap full prune enabled Oct 5 06:11:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e234 e234: 6 total, 6 up, 6 in Oct 5 06:11:27 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e234: 6 total, 6 up, 6 in Oct 5 06:11:27 localhost nova_compute[297021]: 2025-10-05 10:11:27.962 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:27 localhost nova_compute[297021]: 2025-10-05 10:11:27.986 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic 
task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:27 localhost nova_compute[297021]: 2025-10-05 10:11:27.986 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:11:27 localhost nova_compute[297021]: 2025-10-05 10:11:27.987 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:11:28 localhost nova_compute[297021]: 2025-10-05 10:11:28.052 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:11:28 localhost nova_compute[297021]: 2025-10-05 10:11:28.053 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:11:28 localhost nova_compute[297021]: 2025-10-05 10:11:28.053 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:11:28 localhost nova_compute[297021]: 2025-10-05 10:11:28.054 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:11:28 localhost nova_compute[297021]: 2025-10-05 10:11:28.532 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:11:28 localhost nova_compute[297021]: 2025-10-05 10:11:28.553 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:11:28 localhost nova_compute[297021]: 2025-10-05 10:11:28.554 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] 
[instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:11:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e234 do_prune osdmap full prune enabled Oct 5 06:11:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e235 e235: 6 total, 6 up, 6 in Oct 5 06:11:28 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e235: 6 total, 6 up, 6 in Oct 5 06:11:30 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:11:30 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:30 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:11:30 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:30 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:30 
localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:30 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:30 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:30 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:11:30 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth 
get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:31 localhost nova_compute[297021]: 2025-10-05 10:11:31.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:31 localhost nova_compute[297021]: 2025-10-05 10:11:31.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:31 localhost nova_compute[297021]: 2025-10-05 10:11:31.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:11:31 localhost nova_compute[297021]: 2025-10-05 10:11:31.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:31 localhost nova_compute[297021]: 2025-10-05 10:11:31.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:31 localhost nova_compute[297021]: 2025-10-05 10:11:31.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE 
_transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:11:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:11:31 localhost systemd[1]: tmp-crun.V1eW8X.mount: Deactivated successfully. Oct 5 06:11:31 localhost podman[339485]: 2025-10-05 10:11:31.685657647 +0000 UTC m=+0.092370732 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:11:31 localhost podman[339485]: 2025-10-05 10:11:31.724123782 +0000 UTC m=+0.130836877 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:11:31 localhost systemd[1]: 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:11:31 localhost podman[339486]: 2025-10-05 10:11:31.729604199 +0000 UTC m=+0.129949915 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:11:31 localhost podman[339486]: 2025-10-05 
10:11:31.813820442 +0000 UTC m=+0.214166138 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true) Oct 5 06:11:31 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:11:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:33 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:34 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:34 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:35 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:35 localhost ceph-mon[308154]: from='mgr.34408 
172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:35 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:35 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:35.224 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:11:35 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:35.225 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:11:35 localhost nova_compute[297021]: 2025-10-05 10:11:35.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:36 localhost nova_compute[297021]: 2025-10-05 10:11:36.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key 
set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:11:36 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:11:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e235 do_prune osdmap full prune enabled Oct 5 06:11:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 e236: 6 total, 6 up, 6 in Oct 5 06:11:36 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e236: 6 total, 6 up, 6 in Oct 5 06:11:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:11:36 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:11:37 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:11:37 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:11:37 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:11:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:11:37 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:37 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:11:38 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:38 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:38 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:38 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:11:38 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:38 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:38 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], 
"format": "json"}]': finished Oct 5 06:11:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:11:38 localhost podman[339610]: 2025-10-05 10:11:38.682436481 +0000 UTC m=+0.089231469 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:11:38 localhost podman[339610]: 2025-10-05 10:11:38.712862962 +0000 UTC m=+0.119657930 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, 
container_name=ovn_metadata_agent) Oct 5 06:11:38 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:11:39 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:39 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:39 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:39 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], 
"format": "json"} : dispatch Oct 5 06:11:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:41 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:41 
localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:41 localhost ceph-mon[308154]: 
log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:41 localhost nova_compute[297021]: 2025-10-05 10:11:41.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:41 localhost nova_compute[297021]: 2025-10-05 10:11:41.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:41 localhost nova_compute[297021]: 2025-10-05 10:11:41.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:11:41 localhost nova_compute[297021]: 2025-10-05 10:11:41.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:41 localhost nova_compute[297021]: 2025-10-05 10:11:41.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:41 
localhost nova_compute[297021]: 2025-10-05 10:11:41.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:11:41 localhost systemd[1]: tmp-crun.VSBik4.mount: Deactivated successfully. Oct 5 06:11:41 localhost podman[339628]: 2025-10-05 10:11:41.67670173 +0000 UTC m=+0.084550615 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:11:41 localhost podman[339628]: 2025-10-05 10:11:41.725987913 +0000 UTC 
m=+0.133836788 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:11:41 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:11:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:42 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-425502484", "format": "json"} : dispatch Oct 5 06:11:42 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:42 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:42 localhost ceph-mon[308154]: from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:42 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:42 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:42 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:42 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"} v 0) Oct 5 06:11:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"} : dispatch Oct 5 06:11:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"}]': finished Oct 5 06:11:43 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-425502484", "format": "json"} : dispatch Oct 5 
06:11:43 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"} : dispatch Oct 5 06:11:43 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"} : dispatch Oct 5 06:11:43 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"}]': finished Oct 5 06:11:43 localhost ovn_metadata_agent[163429]: 2025-10-05 10:11:43.227 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:11:43 localhost systemd[1]: tmp-crun.zo2QiY.mount: Deactivated successfully. 
Oct 5 06:11:43 localhost podman[339653]: 2025-10-05 10:11:43.695962316 +0000 UTC m=+0.104312080 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 06:11:43 localhost podman[339653]: 2025-10-05 10:11:43.712897397 +0000 UTC m=+0.121247141 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:11:43 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:11:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:11:44 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:44 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:11:44 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:44 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:44 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:44 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": 
"json"} : dispatch Oct 5 06:11:44 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:45 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:11:45 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:45 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:45 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:45 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", 
"allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:46 localhost nova_compute[297021]: 2025-10-05 10:11:46.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:46 localhost nova_compute[297021]: 2025-10-05 10:11:46.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:46 localhost nova_compute[297021]: 2025-10-05 10:11:46.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:11:46 localhost nova_compute[297021]: 2025-10-05 10:11:46.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:46 localhost nova_compute[297021]: 2025-10-05 10:11:46.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:46 localhost nova_compute[297021]: 2025-10-05 10:11:46.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:11:46 localhost podman[339672]: 2025-10-05 10:11:46.686520467 +0000 UTC m=+0.091899550 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64) Oct 5 06:11:46 localhost podman[339672]: 2025-10-05 10:11:46.702868383 +0000 UTC m=+0.108247516 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter) Oct 5 06:11:46 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:11:46 localhost ovn_controller[157794]: 2025-10-05T10:11:46Z|00409|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory Oct 5 06:11:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:47 localhost sshd[339693]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:11:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:48 localhost ceph-mon[308154]: 
log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:48 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:48 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:48 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:48 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:48 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 5 06:11:48 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:49 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:49 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:49 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:49 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:50 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:50 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", 
"entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:51 localhost nova_compute[297021]: 2025-10-05 10:11:51.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:51 localhost nova_compute[297021]: 2025-10-05 10:11:51.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:51 localhost nova_compute[297021]: 2025-10-05 10:11:51.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:11:51 localhost nova_compute[297021]: 2025-10-05 10:11:51.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:51 localhost nova_compute[297021]: 2025-10-05 10:11:51.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:51 localhost nova_compute[297021]: 2025-10-05 10:11:51.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:51 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1938692308", "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", 
"entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:11:51 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:51 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:11:51 localhost podman[248506]: time="2025-10-05T10:11:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:11:51 localhost podman[248506]: @ - - [05/Oct/2025:10:11:51 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:11:51 localhost podman[248506]: @ - - [05/Oct/2025:10:11:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19395 "" "Go-http-client/1.1" Oct 5 06:11:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:11:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:51 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:51 localhost podman[339695]: 2025-10-05 10:11:51.674961811 +0000 UTC m=+0.084929984 container health_status 
9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:11:51 localhost podman[339695]: 2025-10-05 10:11:51.684532666 +0000 UTC m=+0.094500859 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) 
Oct 5 06:11:51 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:11:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:52 localhost openstack_network_exporter[250601]: ERROR 10:11:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:11:52 localhost openstack_network_exporter[250601]: ERROR 10:11:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:11:52 localhost openstack_network_exporter[250601]: ERROR 10:11:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:11:52 localhost openstack_network_exporter[250601]: ERROR 10:11:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:11:52 localhost openstack_network_exporter[250601]: Oct 5 06:11:52 localhost openstack_network_exporter[250601]: ERROR 10:11:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:11:52 localhost openstack_network_exporter[250601]: Oct 5 06:11:52 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:52 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:52 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:52 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': 
finished Oct 5 06:11:52 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:52 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:52 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:52 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], 
"format": "json"} v 0) Oct 5 06:11:54 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:54 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:55 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:55 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:55 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:55 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:55 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:55 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:55 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:55 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:55 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:55 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:11:55 localhost podman[339719]: 2025-10-05 10:11:55.682143581 +0000 UTC m=+0.088804987 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:11:55 localhost podman[339719]: 2025-10-05 10:11:55.695754744 +0000 UTC m=+0.102416140 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:11:55 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:11:56 localhost nova_compute[297021]: 2025-10-05 10:11:56.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:56 localhost nova_compute[297021]: 2025-10-05 10:11:56.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:11:56 localhost nova_compute[297021]: 2025-10-05 10:11:56.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:11:56 localhost nova_compute[297021]: 2025-10-05 10:11:56.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:56 localhost nova_compute[297021]: 2025-10-05 10:11:56.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:56 localhost nova_compute[297021]: 2025-10-05 10:11:56.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:11:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"} v 0) Oct 5 06:11:57 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"} : dispatch Oct 5 06:11:57 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' 
cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"}]': finished Oct 5 06:11:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:11:58 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:58 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:11:58 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1938692308", "format": "json"} : dispatch Oct 5 06:11:58 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"} : dispatch Oct 5 06:11:58 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"} : dispatch Oct 5 06:11:58 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"}]': finished Oct 5 06:11:58 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:58 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:58 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:58 localhost ceph-mon[308154]: 
from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:11:59 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. Oct 5 06:12:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:01 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:01 localhost nova_compute[297021]: 2025-10-05 10:12:01.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:01 localhost nova_compute[297021]: 2025-10-05 10:12:01.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:01 localhost nova_compute[297021]: 2025-10-05 10:12:01.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:01 localhost nova_compute[297021]: 2025-10-05 10:12:01.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:01 localhost nova_compute[297021]: 2025-10-05 10:12:01.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:01 localhost nova_compute[297021]: 2025-10-05 10:12:01.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) Oct 5 06:12:01 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Oct 5 06:12:01 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Oct 5 06:12:01 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", 
"allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:01 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Oct 5 06:12:01 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Oct 5 06:12:01 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Oct 5 06:12:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:12:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:12:02 localhost podman[339742]: 2025-10-05 10:12:02.673812229 +0000 UTC m=+0.076724426 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 06:12:02 localhost podman[339742]: 
2025-10-05 10:12:02.6877242 +0000 UTC m=+0.090636357 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:12:02 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:12:02 localhost podman[339743]: 2025-10-05 10:12:02.740295941 +0000 UTC m=+0.137417224 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:12:02 localhost podman[339743]: 2025-10-05 10:12:02.751837738 +0000 UTC m=+0.148959031 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:12:02 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:12:04 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Oct 5 06:12:04 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:12:04 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:04 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:12:05 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:05 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:05 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:05 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:12:06 localhost nova_compute[297021]: 2025-10-05 10:12:06.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:06 localhost nova_compute[297021]: 2025-10-05 10:12:06.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:06 localhost nova_compute[297021]: 2025-10-05 10:12:06.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending 
inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:06 localhost nova_compute[297021]: 2025-10-05 10:12:06.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:06 localhost nova_compute[297021]: 2025-10-05 10:12:06.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:06 localhost nova_compute[297021]: 2025-10-05 10:12:06.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:07 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 5 06:12:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:07 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"} : dispatch 
Oct 5 06:12:08 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:08 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:08 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:08 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": 
["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:08 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:12:09 localhost podman[339778]: 2025-10-05 10:12:09.676271424 +0000 UTC m=+0.086725281 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:12:09 localhost podman[339778]: 2025-10-05 10:12:09.686149528 +0000 UTC m=+0.096603405 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:12:09 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:12:11 localhost nova_compute[297021]: 2025-10-05 10:12:11.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:11 localhost nova_compute[297021]: 2025-10-05 10:12:11.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:11 localhost nova_compute[297021]: 2025-10-05 10:12:11.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:11 localhost nova_compute[297021]: 2025-10-05 10:12:11.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:11 localhost nova_compute[297021]: 2025-10-05 10:12:11.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:11 localhost nova_compute[297021]: 2025-10-05 10:12:11.406 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:12:11 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:11 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:12 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:12 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:12 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:12 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:12:12 localhost podman[339796]: 2025-10-05 10:12:12.669821645 +0000 UTC m=+0.081372639 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 06:12:12 localhost podman[339796]: 2025-10-05 10:12:12.743168279 +0000 UTC m=+0.154719273 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Oct 5 06:12:12 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:12:14 localhost podman[339820]: 2025-10-05 10:12:14.676230429 +0000 UTC m=+0.084746439 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:12:14 localhost podman[339820]: 2025-10-05 10:12:14.715837014 +0000 UTC m=+0.124353004 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:12:14 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:12:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 5 06:12:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:16 localhost nova_compute[297021]: 2025-10-05 10:12:16.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:16 localhost nova_compute[297021]: 2025-10-05 10:12:16.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:16 localhost nova_compute[297021]: 2025-10-05 10:12:16.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:16 localhost nova_compute[297021]: 2025-10-05 10:12:16.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:16 localhost 
nova_compute[297021]: 2025-10-05 10:12:16.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:16 localhost nova_compute[297021]: 2025-10-05 10:12:16.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:12:16 localhost nova_compute[297021]: 2025-10-05 10:12:16.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:16 localhost nova_compute[297021]: 2025-10-05 10:12:16.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:12:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:17 localhost systemd[1]: tmp-crun.V54oVI.mount: Deactivated successfully. Oct 5 06:12:17 localhost podman[339839]: 2025-10-05 10:12:17.022345827 +0000 UTC m=+0.094758046 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vendor=Red Hat, Inc.) Oct 5 06:12:17 localhost podman[339839]: 2025-10-05 10:12:17.038935579 +0000 UTC m=+0.111347798 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Oct 5 06:12:17 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:12:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:12:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:18 localhost nova_compute[297021]: 2025-10-05 10:12:18.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:18 localhost nova_compute[297021]: 2025-10-05 10:12:18.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:19 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:19 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:19 localhost nova_compute[297021]: 2025-10-05 10:12:19.417 2 DEBUG 
oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:19 localhost nova_compute[297021]: 2025-10-05 10:12:19.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:12:20.473 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:12:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:12:20.474 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:12:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:12:20.475 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:12:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Oct 5 06:12:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Oct 5 06:12:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Oct 5 06:12:21 localhost nova_compute[297021]: 2025-10-05 10:12:21.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:21 localhost nova_compute[297021]: 2025-10-05 10:12:21.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:21 localhost nova_compute[297021]: 2025-10-05 10:12:21.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 06:12:21 localhost nova_compute[297021]: 2025-10-05 10:12:21.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:21 localhost nova_compute[297021]: 2025-10-05 10:12:21.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:21 localhost nova_compute[297021]: 2025-10-05 10:12:21.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5033 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:21 localhost nova_compute[297021]: 2025-10-05 10:12:21.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:21 localhost podman[248506]: time="2025-10-05T10:12:21Z" 
level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:12:21 localhost nova_compute[297021]: 2025-10-05 10:12:21.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:21 localhost nova_compute[297021]: 2025-10-05 10:12:21.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:21 localhost podman[248506]: @ - - [05/Oct/2025:10:12:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:12:21 localhost podman[248506]: @ - - [05/Oct/2025:10:12:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19395 "" "Go-http-client/1.1" Oct 5 06:12:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:21 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": 
"client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:22 localhost openstack_network_exporter[250601]: ERROR 10:12:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:12:22 localhost openstack_network_exporter[250601]: ERROR 10:12:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:12:22 localhost openstack_network_exporter[250601]: ERROR 10:12:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:12:22 localhost openstack_network_exporter[250601]: ERROR 10:12:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:12:22 localhost openstack_network_exporter[250601]: Oct 5 06:12:22 localhost openstack_network_exporter[250601]: ERROR 10:12:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:12:22 localhost openstack_network_exporter[250601]: Oct 5 06:12:22 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 5 06:12:22 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Oct 5 06:12:22 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Oct 5 06:12:22 
localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Oct 5 06:12:22 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:12:22 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:22 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:22 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:12:22 localhost podman[339858]: 2025-10-05 10:12:22.672412955 +0000 UTC m=+0.078969576 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:12:22 localhost podman[339858]: 2025-10-05 10:12:22.685164274 +0000 UTC m=+0.091720905 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:12:22 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:12:23 localhost nova_compute[297021]: 2025-10-05 10:12:23.443 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:12:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:12:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:12:25 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:12:25 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:12:25 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:12:25 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:12:25 localhost nova_compute[297021]: 2025-10-05 10:12:25.420 2 DEBUG oslo_service.periodic_task [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:25 localhost nova_compute[297021]: 2025-10-05 10:12:25.449 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:12:25 localhost nova_compute[297021]: 2025-10-05 10:12:25.449 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:12:25 localhost nova_compute[297021]: 2025-10-05 10:12:25.449 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:12:25 localhost nova_compute[297021]: 2025-10-05 10:12:25.450 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:12:25 localhost nova_compute[297021]: 2025-10-05 10:12:25.450 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:12:25 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:12:25 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3241994964' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:12:25 localhost nova_compute[297021]: 2025-10-05 10:12:25.944 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.325 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.326 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending 
inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.591 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.593 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11074MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.593 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:12:26 localhost nova_compute[297021]: 2025-10-05 10:12:26.594 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:12:26 localhost podman[339904]: 2025-10-05 10:12:26.679734448 +0000 UTC m=+0.084575965 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', 
'--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:12:26 localhost podman[339904]: 2025-10-05 10:12:26.687326621 +0000 UTC m=+0.092168138 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:12:26 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:12:26 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:27 localhost nova_compute[297021]: 2025-10-05 10:12:27.205 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:12:27 localhost nova_compute[297021]: 2025-10-05 10:12:27.206 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:12:27 localhost nova_compute[297021]: 2025-10-05 10:12:27.207 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:12:27 localhost nova_compute[297021]: 2025-10-05 10:12:27.734 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing inventories for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c _refresh_associations 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 06:12:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:28 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:28 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:28 localhost nova_compute[297021]: 2025-10-05 10:12:28.413 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating ProviderTree inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 
'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 06:12:28 localhost nova_compute[297021]: 2025-10-05 10:12:28.414 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:12:28 localhost nova_compute[297021]: 2025-10-05 10:12:28.433 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing aggregate associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 06:12:28 localhost nova_compute[297021]: 2025-10-05 10:12:28.462 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing trait associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, traits: 
HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 06:12:28 localhost nova_compute[297021]: 2025-10-05 10:12:28.521 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:12:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Oct 5 06:12:28 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3265320258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:12:29 localhost nova_compute[297021]: 2025-10-05 10:12:29.013 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:12:29 localhost nova_compute[297021]: 2025-10-05 10:12:29.020 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:12:29 localhost nova_compute[297021]: 2025-10-05 10:12:29.057 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:12:29 localhost nova_compute[297021]: 2025-10-05 10:12:29.059 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:12:29 
localhost nova_compute[297021]: 2025-10-05 10:12:29.060 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:12:29 localhost nova_compute[297021]: 2025-10-05 10:12:29.061 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:29 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:12:29 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:29 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:29 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:30 localhost nova_compute[297021]: 2025-10-05 10:12:30.432 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:30 localhost nova_compute[297021]: 2025-10-05 10:12:30.433 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:12:30 localhost nova_compute[297021]: 2025-10-05 10:12:30.433 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:12:30 localhost nova_compute[297021]: 2025-10-05 10:12:30.542 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:12:30 localhost nova_compute[297021]: 2025-10-05 10:12:30.545 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:12:30 localhost nova_compute[297021]: 2025-10-05 10:12:30.546 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully 
refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:12:30 localhost nova_compute[297021]: 2025-10-05 10:12:30.546 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:12:31 localhost nova_compute[297021]: 2025-10-05 10:12:31.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:31 localhost nova_compute[297021]: 2025-10-05 10:12:31.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:31 localhost nova_compute[297021]: 2025-10-05 10:12:31.541 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": 
"tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:12:31 localhost nova_compute[297021]: 2025-10-05 10:12:31.555 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:12:31 localhost nova_compute[297021]: 2025-10-05 10:12:31.556 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:12:31 localhost nova_compute[297021]: 2025-10-05 10:12:31.556 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:31 localhost nova_compute[297021]: 2025-10-05 10:12:31.556 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 06:12:31 localhost nova_compute[297021]: 2025-10-05 10:12:31.573 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 06:12:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": 
"auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:12:31 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:12:31 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:12:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:32 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:12:32 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:12:32 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:12:32 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:12:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:12:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 06:12:33 localhost podman[339950]: 2025-10-05 10:12:33.676557123 +0000 UTC m=+0.080898007 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:12:33 localhost podman[339950]: 2025-10-05 10:12:33.690788732 +0000 UTC m=+0.095129606 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:12:33 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:12:33 localhost podman[339951]: 2025-10-05 10:12:33.785094576 +0000 UTC m=+0.183993445 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:12:33 localhost podman[339951]: 2025-10-05 10:12:33.817621712 +0000 UTC m=+0.216520561 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd) Oct 5 06:12:33 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:12:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:35 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:35 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:35 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:35 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:35 
localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:35 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:36 localhost nova_compute[297021]: 2025-10-05 10:12:36.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:36 localhost nova_compute[297021]: 2025-10-05 10:12:36.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:36 localhost nova_compute[297021]: 2025-10-05 10:12:36.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:36 localhost nova_compute[297021]: 2025-10-05 10:12:36.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:36 localhost nova_compute[297021]: 2025-10-05 10:12:36.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:36 localhost nova_compute[297021]: 2025-10-05 10:12:36.537 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:36 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:12:37 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:12:38 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:12:38 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:12:38 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:12:38 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:38 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.841 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 
'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.841 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.842 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.846 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '76000255-b203-4885-ad7d-53934aeb74bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.842261', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd264b5e2-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': '9b007790d5db47acb31d21e6f88011e94ec350a9d7df4356ee2d4301a50da7b7'}]}, 'timestamp': '2025-10-05 10:12:38.847612', '_unique_id': 'c1fece5d73d0480ea76ee4ae3744428d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.849 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.851 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.851 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ae6f9dfd-57d7-40f4-9f2d-e9076c1f9c27', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.851253', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd2655f92-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': '714b5ae7ca15d10ef822108ac5f7c2759203944d7a5cf8810ee101785c9b8a7d'}]}, 'timestamp': '2025-10-05 10:12:38.851837', '_unique_id': 'a8360ce952fa45f88c0cddc8265349ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:12:38.852 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.852 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.853 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.875 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4aabd3f7-dea7-4162-be45-d9e2258748cb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:12:38.854114', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2691e02-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': 'cf9c5ae4e696e0a482fe75776a18351d03388182b159a7ea88a82c258520f1d1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:12:38.854114', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd26925aa-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': '1a95b807f137099b7022aab7e47ab8838667f927d417e73e712d4eb446656d94'}]}, 'timestamp': '2025-10-05 10:12:38.876424', '_unique_id': '0798ef865744406caa94030c4ae1dbcd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.876 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.877 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.877 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4be877c3-9ca7-4d0e-92d3-48f89fef7ea7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.877453', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd269569c-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': '145a5261da7e5370e430a60df0c97b89d13ee2bbca7357142964535036b546c3'}]}, 'timestamp': '2025-10-05 10:12:38.877663', '_unique_id': 'b6feb93c1c9e481d909728c66fd5daed'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.878 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a80d7000-26ca-41c9-91f2-32cc4cf9a67b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:12:38.878591', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'd26c0504-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.118832727, 'message_signature': '7bd5a0528bc494624f516308d7dc669d2a9b4709d7866a1cce485408c2f99bea'}]}, 'timestamp': '2025-10-05 10:12:38.895232', '_unique_id': '335575bbea3846a59db52bb7410283c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc))
from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.895 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.896 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.906 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.906 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '56e237e1-cf91-4be7-9bfe-4646acacab98', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:12:38.896195', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd26db958-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.120069959, 'message_signature': '02b31b6f39433e0894ea2af6683130d41bc8aae0a445b8d309ba067ff751fa79'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:12:38.896195', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd26dc178-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.120069959, 'message_signature': '1a4d50bf4e2bb5d77963f7981506cd7e32c7c87448162916ace00f65f9141ae5'}]}, 'timestamp': '2025-10-05 10:12:38.906598', '_unique_id': '4968cfbbf41f4f42a2c4f153c83eadbd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.907 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3811a43a-9a8c-475f-906c-a63735d07c7d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.907573', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd26def4a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': '801f012ac6e00c1575dabad02cd44b4940174f6e328410fe1976dccade125a6e'}]}, 'timestamp': '2025-10-05 10:12:38.907786', '_unique_id': '82e53767b10e4e588acc131184face95'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.908 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.908 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6072e30f-e44d-4770-a602-b2d48ff53efe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:12:38.908700', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd26e1af6-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': '4c5b18aa6d27ab7b9900e02e395d194f8770a4542e1303d8c7c0e544afb1fa2f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:12:38.908700', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd26e21e0-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': 'f1e83f1e7b5772ef5ddec01d4d3d1d443135e2c8f17d0ba5bfa3a3143d3cef72'}]}, 'timestamp': '2025-10-05 10:12:38.909067', '_unique_id': '8cce047629314994b542df534a48a34d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:12:38.909 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.909 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '64cb9a2e-0692-415d-9a09-a3e3e8da817e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:12:38.910004', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd26e4ddc-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.120069959, 'message_signature': 'eecb2efdebf7327f7502e20e2f650272d4dbd3725846b1a8b7a53f43fcb1e969'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:12:38.910004', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd26e54c6-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.120069959, 'message_signature': 'a024cd35e56c4deac34aed42e9327be5c58e30e7f82edef1ed60d4942ba7347f'}]}, 'timestamp': '2025-10-05 10:12:38.910368', '_unique_id': 'c9c66be75afb48d8af402158a3058729'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.911 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.911 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.911 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'a3a08e06-713f-41e2-baed-3332f4b0256a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:12:38.911319', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd26e8216-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': '782efc355942df884a5adb6ee7c2cfdb4ade1aef865a108676eab445ab7ddd40'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:12:38.911319', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd26e891e-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': '9c655f66ba05013c5211d489f23a7e74af88ddb239ddb15e7b2aeb011dcf2804'}]}, 'timestamp': '2025-10-05 10:12:38.911708', '_unique_id': '4c81286fd0774e388e4997151ad88233'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:12:38.912 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.912 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '36df46e6-438e-47b7-bf11-41b31d2bd8fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.912700', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd26eb72c-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': '7a7a4ebb3412927a6226adf305eda07dbb7c397c270a94138b98fe540e335ccd'}]}, 'timestamp': '2025-10-05 10:12:38.912901', '_unique_id': '13a01e43576e41828e9d9dd46c4eb465'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.913 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.913 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '70218f25-c3df-472d-b123-57d4806724f9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:12:38.913798', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd26ee210-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.120069959, 'message_signature': '664d6df207e5403ef10f150a6c87883cd56e021222481f0dcabeb0715bd36a17'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:12:38.913798', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd26ee8dc-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.120069959, 'message_signature': 'fe0c82cdf9e861d78bd8e66fb8df538b479402a364bb9ce4b4f8aa4259c7cc94'}]}, 'timestamp': '2025-10-05 10:12:38.914160', '_unique_id': '45385855ac9848dd88eb582408ef86c3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.914 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.915 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9528c1cc-61e3-4b14-b667-670d8c11aefa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:12:38.915159', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd26f173a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': '6a949287a0373145e357b9ae552c78030647065a943a857b5d2cd5af203a5978'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:12:38.915159', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd26f1ec4-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': '6438be7c7cee8f4d45bf8e16765d00115699215c32aa14ac4f9608442fae8324'}]}, 'timestamp': '2025-10-05 10:12:38.915539', '_unique_id': 'acf7ccc78bca4651832627b561fa315e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.915 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.916 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.916 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.916 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '922af486-bf0a-4482-a60c-96c29c913fee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:12:38.916475', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd26f4ad4-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': 'f82d7f0afd07545c5e99c2ffa376bfb752404e1dd2068f9f5df0a5a27b2e64af'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:12:38.916475', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd26f51aa-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': '9d57640293eeb57ea2bc23de2bc7a336ea82908ed04e8a78fc811fd7e74c3e93'}]}, 'timestamp': '2025-10-05 10:12:38.916842', '_unique_id': '638673777855453b862a335a985efcfd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 
06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.917 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f13f2683-3d5a-445c-86f0-117229ee1a8f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.917765', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd26f7d06-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': '334aa473d63c918339c03b5afbe8015729367b280603f05039449384c8f2814c'}]}, 'timestamp': '2025-10-05 10:12:38.917966', '_unique_id': '52c9e3845a8d4d5cabffbc148e711815'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.918 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.918 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7f0549d9-3329-40a9-95ae-a39872446927', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:12:38.918894', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd26fa90c-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': '55b31aebc2598729e7d2ff62f3dc59216270cea01a4f179dc7922648ce0439be'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:12:38.918894', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd26fafe2-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.07804875, 'message_signature': '6de4f661bd795ffba618fa351be8ae83684093749005d20cb6e9522302f8d6fb'}]}, 'timestamp': '2025-10-05 10:12:38.919254', '_unique_id': '4353611c1adc469b81c283dcbdaea267'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.919 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.920 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3733a3f1-c514-4c5a-a5bd-50951b009998', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.920179', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd26fdb70-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': '31a91d63b94ecf6b02d3706e15173ff736f4bedf200b3930e00761cf07a1a99f'}]}, 'timestamp': '2025-10-05 10:12:38.920384', '_unique_id': '16ac601aaa3842c7aa1f07d9ae8d61c6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:12:38.920 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.920 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a82d8a25-b688-4ba2-be1e-c853e5f9aca2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.921291', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd2700794-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': '6f6fb4df70cb9f09471a647b31caefa33eda3567b7c548417d0a4bb80cdb2f11'}]}, 'timestamp': '2025-10-05 10:12:38.921514', '_unique_id': 'a49c68f1169d4b26b16ecc7ded21d16a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.921 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.922 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.922 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5d8b3434-d70a-4a44-bf4e-6b6f583f741a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.922543', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd2703c64-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': 'efb5bf8d2c34487380ec2dd68b689a3a2606872c97bd59285928df1f4e256678'}]}, 'timestamp': '2025-10-05 10:12:38.922867', '_unique_id': '02e47fab7d3d4354b18a89141fc15ed1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.923 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '36d7bd7a-7721-4792-b134-2c1b815d881c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:12:38.923777', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': 'd27067e8-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.066190964, 'message_signature': '1785befb05e8d85fad03d611e98e0097d56cb9369c95974b5b1494fc15611881'}]}, 'timestamp': '2025-10-05 10:12:38.923978', '_unique_id': 'fef1518bf5a7468f89bf7c6a21bcdcb9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:12:38 localhost
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.924 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.924 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 18770000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ec217a82-225d-4101-9958-5461236f2be0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18770000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:12:38.924864', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'd270924a-a1d3-11f0-9396-fa163ec6f33d', 'monotonic_time': 12638.118832727, 'message_signature': 
'707bbdc33add99865e87167c75b8c6a8e8a2271165ff0bf0c363675a367005d4'}]}, 'timestamp': '2025-10-05 10:12:38.925057', '_unique_id': '90633ca568f04f10ab736b55b9f97065'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR 
oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:12:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 ERROR oslo_messaging.notify.messaging Oct 5 06:12:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:12:38.925 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:39 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:39 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:39 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:39 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:12:40 localhost podman[340075]: 2025-10-05 10:12:40.693650456 +0000 UTC m=+0.093505333 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:12:40 localhost podman[340075]: 2025-10-05 10:12:40.703923159 +0000 UTC 
m=+0.103778026 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS) Oct 5 06:12:40 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:12:41 localhost nova_compute[297021]: 2025-10-05 10:12:41.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:41 localhost nova_compute[297021]: 2025-10-05 10:12:41.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:41 localhost nova_compute[297021]: 2025-10-05 10:12:41.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:41 localhost nova_compute[297021]: 2025-10-05 10:12:41.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:41 localhost nova_compute[297021]: 2025-10-05 10:12:41.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:41 localhost nova_compute[297021]: 2025-10-05 10:12:41.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:12:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:12:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:12:42.198 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:12:42 localhost ovn_metadata_agent[163429]: 2025-10-05 10:12:42.199 163434 DEBUG neutron.agent.ovn.metadata.agent [-] 
Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:12:42 localhost nova_compute[297021]: 2025-10-05 10:12:42.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:42 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:42 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:42 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:42 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:42 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:12:43 localhost systemd[1]: Started /usr/bin/podman healthcheck 
run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:12:43 localhost podman[340093]: 2025-10-05 10:12:43.655579353 +0000 UTC m=+0.066590696 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller) Oct 5 06:12:43 localhost podman[340093]: 2025-10-05 10:12:43.729887653 +0000 UTC m=+0.140898966 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2) Oct 5 06:12:43 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:12:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:12:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:12:45 localhost podman[340118]: 2025-10-05 10:12:45.670950576 +0000 UTC m=+0.083833354 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 06:12:45 localhost podman[340118]: 2025-10-05 10:12:45.681164749 +0000 UTC m=+0.094047567 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:12:45 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:12:45 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:45 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:45 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:45 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:12:46 localhost nova_compute[297021]: 2025-10-05 10:12:46.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:12:47 localhost podman[340138]: 2025-10-05 10:12:47.672286526 +0000 UTC m=+0.083527347 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal) Oct 5 06:12:47 localhost podman[340138]: 2025-10-05 
10:12:47.690735237 +0000 UTC m=+0.101976048 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc.) Oct 5 06:12:47 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:12:48 localhost ovn_metadata_agent[163429]: 2025-10-05 10:12:48.200 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:12:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:48 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:48 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e236 do_prune osdmap full prune enabled Oct 5 06:12:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e237 e237: 6 total, 6 up, 
6 in Oct 5 06:12:49 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e237: 6 total, 6 up, 6 in Oct 5 06:12:49 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:49 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:49 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:49 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e237 do_prune osdmap full prune enabled Oct 5 06:12:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e238 e238: 6 total, 6 up, 6 in Oct 5 06:12:51 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e238: 6 total, 6 up, 6 in Oct 5 06:12:51 localhost podman[248506]: 
time="2025-10-05T10:12:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:12:51 localhost podman[248506]: @ - - [05/Oct/2025:10:12:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:12:51 localhost podman[248506]: @ - - [05/Oct/2025:10:12:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19391 "" "Go-http-client/1.1" Oct 5 06:12:51 localhost nova_compute[297021]: 2025-10-05 10:12:51.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:51 localhost nova_compute[297021]: 2025-10-05 10:12:51.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:51 localhost nova_compute[297021]: 2025-10-05 10:12:51.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:51 localhost nova_compute[297021]: 2025-10-05 10:12:51.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:51 localhost nova_compute[297021]: 2025-10-05 10:12:51.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:51 localhost nova_compute[297021]: 2025-10-05 10:12:51.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:12:51 
localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:51 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:52 localhost openstack_network_exporter[250601]: ERROR 10:12:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:12:52 localhost openstack_network_exporter[250601]: ERROR 10:12:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:12:52 localhost openstack_network_exporter[250601]: ERROR 10:12:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:12:52 localhost openstack_network_exporter[250601]: ERROR 10:12:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:12:52 localhost openstack_network_exporter[250601]: Oct 5 06:12:52 localhost openstack_network_exporter[250601]: ERROR 10:12:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:12:52 localhost openstack_network_exporter[250601]: Oct 5 06:12:52 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:52 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:52 localhost ceph-mon[308154]: from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:52 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e238 do_prune osdmap full prune enabled Oct 5 06:12:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e239 e239: 6 total, 6 up, 6 in Oct 5 06:12:53 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e239: 6 total, 6 up, 6 in Oct 5 06:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:12:53 localhost podman[340158]: 2025-10-05 10:12:53.667147761 +0000 UTC m=+0.079115339 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:12:53 localhost podman[340158]: 2025-10-05 10:12:53.679827349 +0000 UTC m=+0.091795317 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:12:53 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 06:12:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e239 do_prune osdmap full prune enabled Oct 5 06:12:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e240 e240: 6 total, 6 up, 6 in Oct 5 06:12:54 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e240: 6 total, 6 up, 6 in Oct 5 06:12:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:54 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:54 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:55 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:55 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' 
cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:55 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:55 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:56 localhost nova_compute[297021]: 2025-10-05 10:12:56.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:12:56 localhost nova_compute[297021]: 2025-10-05 10:12:56.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:56 localhost nova_compute[297021]: 2025-10-05 10:12:56.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5024 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:12:56 localhost nova_compute[297021]: 2025-10-05 10:12:56.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:56 localhost nova_compute[297021]: 2025-10-05 10:12:56.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:12:56 localhost nova_compute[297021]: 2025-10-05 10:12:56.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e240 do_prune osdmap full prune enabled Oct 5 06:12:56 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e241 e241: 6 total, 6 up, 6 in Oct 5 06:12:56 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e241: 6 total, 6 up, 6 in Oct 5 06:12:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:12:57 localhost podman[340182]: 2025-10-05 10:12:57.674867815 +0000 UTC m=+0.083906457 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:12:57 localhost podman[340182]: 2025-10-05 10:12:57.686712691 +0000 UTC m=+0.095751343 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:12:57 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:12:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:12:58 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:58 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:59 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:59 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:59 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:59 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:13:01 localhost nova_compute[297021]: 2025-10-05 10:13:01.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:01 localhost nova_compute[297021]: 2025-10-05 10:13:01.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:01 localhost nova_compute[297021]: 2025-10-05 10:13:01.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:13:01 localhost nova_compute[297021]: 2025-10-05 10:13:01.647 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:01 localhost nova_compute[297021]: 2025-10-05 10:13:01.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:01 localhost nova_compute[297021]: 2025-10-05 10:13:01.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:01 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:01 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e241 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e241 do_prune osdmap full prune enabled Oct 5 06:13:01 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e242 e242: 6 total, 6 up, 6 in Oct 5 06:13:01 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e242: 6 total, 6 up, 6 in Oct 5 06:13:02 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:02 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:02 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:02 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:04 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:13:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:13:04 localhost podman[340206]: 2025-10-05 10:13:04.676089879 +0000 UTC m=+0.085036728 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible) Oct 5 
06:13:04 localhost podman[340206]: 2025-10-05 10:13:04.71404486 +0000 UTC m=+0.122991679 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 06:13:04 localhost podman[340207]: 2025-10-05 10:13:04.731590407 +0000 UTC m=+0.136360534 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, 
health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible) Oct 5 06:13:04 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:13:04 localhost podman[340207]: 2025-10-05 10:13:04.749105445 +0000 UTC m=+0.153875562 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:13:04 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:13:04 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:13:04 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:05 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:05 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:05 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:05 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:05 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:05 localhost nova_compute[297021]: 2025-10-05 10:13:05.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:06 localhost nova_compute[297021]: 2025-10-05 10:13:06.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:06 localhost nova_compute[297021]: 2025-10-05 10:13:06.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:08 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:08 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:08 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:08 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:08 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data 
namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:08 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:08 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:09 localhost nova_compute[297021]: 2025-10-05 10:13:09.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e242 do_prune osdmap full prune enabled Oct 5 06:13:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e243 e243: 6 total, 6 up, 6 in Oct 5 06:13:09 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e243: 6 total, 6 up, 6 in Oct 5 06:13:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e243 do_prune osdmap full prune enabled Oct 5 06:13:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e244 e244: 6 total, 6 up, 6 in Oct 5 06:13:11 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e244: 6 total, 6 up, 6 in Oct 5 06:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:13:11 localhost podman[340242]: 2025-10-05 10:13:11.670465116 +0000 UTC m=+0.077593018 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:13:11 localhost podman[340242]: 2025-10-05 10:13:11.675048679 +0000 UTC 
m=+0.082176571 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Oct 5 06:13:11 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:13:11 localhost nova_compute[297021]: 2025-10-05 10:13:11.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:13:11 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:11 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:11 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e244 do_prune osdmap full prune enabled Oct 5 06:13:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e245 e245: 6 total, 6 up, 6 in Oct 5 06:13:12 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e245: 6 total, 6 up, 6 in Oct 5 06:13:12 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:12 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:12 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:12 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:14 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:13:14 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e45: np0005471152.kbhlus(active, since 14m), standbys: np0005471150.zwqxye, np0005471151.jecxod Oct 5 06:13:14 localhost systemd[1]: tmp-crun.T6s05h.mount: Deactivated successfully. Oct 5 06:13:14 localhost podman[340261]: 2025-10-05 10:13:14.697617491 +0000 UTC m=+0.102091312 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 06:13:14 localhost podman[340261]: 2025-10-05 10:13:14.735063829 +0000 UTC m=+0.139537660 container exec_died 
1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_managed=true) Oct 5 06:13:14 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:13:14 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:14 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:15 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:15 
localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:15 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:16 localhost nova_compute[297021]: 2025-10-05 10:13:16.440 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:16 localhost nova_compute[297021]: 2025-10-05 10:13:16.440 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:13:16 localhost podman[340287]: 2025-10-05 10:13:16.705921027 +0000 UTC m=+0.102256776 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 06:13:16 localhost nova_compute[297021]: 2025-10-05 10:13:16.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:16 localhost nova_compute[297021]: 2025-10-05 10:13:16.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:16 localhost nova_compute[297021]: 2025-10-05 10:13:16.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:13:16 localhost nova_compute[297021]: 2025-10-05 10:13:16.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:16 localhost podman[340287]: 2025-10-05 10:13:16.72066429 +0000 UTC m=+0.117000039 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Oct 5 06:13:16 localhost nova_compute[297021]: 2025-10-05 10:13:16.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:16 localhost nova_compute[297021]: 2025-10-05 10:13:16.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:16 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:13:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:16 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e245 do_prune osdmap full prune enabled Oct 5 06:13:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e246 e246: 6 total, 6 up, 6 in Oct 5 06:13:17 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e246: 6 total, 6 up, 6 in Oct 5 06:13:18 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:13:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:18 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:13:18 localhost nova_compute[297021]: 2025-10-05 10:13:18.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:18 localhost nova_compute[297021]: 2025-10-05 10:13:18.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:13:18 localhost podman[340306]: 2025-10-05 10:13:18.733054324 +0000 UTC m=+0.133497768 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, 
container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container) Oct 5 06:13:18 localhost podman[340306]: 2025-10-05 10:13:18.747759246 +0000 UTC m=+0.148202750 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 06:13:18 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:13:19 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:19 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:19 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:13:19 localhost nova_compute[297021]: 2025-10-05 10:13:19.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:13:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1710886217' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:13:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:13:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1710886217' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:13:20 localhost nova_compute[297021]: 2025-10-05 10:13:20.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:13:20.474 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:13:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:13:20.475 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:13:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:13:20.475 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:13:21 localhost podman[248506]: time="2025-10-05T10:13:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:13:21 localhost 
podman[248506]: @ - - [05/Oct/2025:10:13:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:13:21 localhost podman[248506]: @ - - [05/Oct/2025:10:13:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19385 "" "Go-http-client/1.1" Oct 5 06:13:21 localhost nova_compute[297021]: 2025-10-05 10:13:21.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:21 localhost nova_compute[297021]: 2025-10-05 10:13:21.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:21 localhost nova_compute[297021]: 2025-10-05 10:13:21.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:13:21 localhost nova_compute[297021]: 2025-10-05 10:13:21.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:21 localhost nova_compute[297021]: 2025-10-05 10:13:21.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:21 localhost nova_compute[297021]: 2025-10-05 10:13:21.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:21 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:22 localhost openstack_network_exporter[250601]: ERROR 10:13:22 appctl.go:131: Failed to prepare call to 
ovsdb-server: no control socket files found for the ovs db server Oct 5 06:13:22 localhost openstack_network_exporter[250601]: ERROR 10:13:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:13:22 localhost openstack_network_exporter[250601]: ERROR 10:13:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:13:22 localhost openstack_network_exporter[250601]: ERROR 10:13:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:13:22 localhost openstack_network_exporter[250601]: Oct 5 06:13:22 localhost openstack_network_exporter[250601]: ERROR 10:13:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:13:22 localhost openstack_network_exporter[250601]: Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0. 
Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.049383) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67 Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202049559, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2629, "num_deletes": 256, "total_data_size": 2467045, "memory_usage": 2531472, "flush_reason": "Manual Compaction"} Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202070351, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 2406074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35623, "largest_seqno": 38250, "table_properties": {"data_size": 2394937, "index_size": 6746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 29546, "raw_average_key_size": 22, "raw_value_size": 2370244, "raw_average_value_size": 1802, "num_data_blocks": 290, "num_entries": 1315, "num_filter_entries": 1315, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659085, "oldest_key_time": 1759659085, "file_creation_time": 1759659202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}} Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 20989 microseconds, and 7065 cpu microseconds. Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.070458) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 2406074 bytes OK Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.070482) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.076688) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.076707) EVENT_LOG_v1 {"time_micros": 1759659202076701, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.076730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2454866, prev total WAL file size 
2454866, number of live WAL files 2. Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.077612) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end) Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(2349KB)], [66(16MB)] Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202077655, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 19598994, "oldest_snapshot_seqno": -1} Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 14332 keys, 18035541 bytes, temperature: kUnknown Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202217734, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 18035541, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17951579, "index_size": 47061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35845, "raw_key_size": 384136, "raw_average_key_size": 26, "raw_value_size": 17706105, 
"raw_average_value_size": 1235, "num_data_blocks": 1756, "num_entries": 14332, "num_filter_entries": 14332, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759659202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}} Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.218100) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 18035541 bytes Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.243270) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.8 rd, 128.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 16.4 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(15.6) write-amplify(7.5) OK, records in: 14870, records dropped: 538 output_compression: NoCompression Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.243303) EVENT_LOG_v1 {"time_micros": 1759659202243290, "job": 40, "event": "compaction_finished", "compaction_time_micros": 140181, "compaction_time_cpu_micros": 52161, "output_level": 6, "num_output_files": 1, "total_output_size": 18035541, "num_input_records": 14870, "num_output_records": 14332, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202243758, "job": 40, "event": "table_file_deletion", "file_number": 68} Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202245941, "job": 
40, "event": "table_file_deletion", "file_number": 66} Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.077515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.246057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.246066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.246071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.246075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:22.246079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:23 localhost nova_compute[297021]: 2025-10-05 10:13:23.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:23 localhost nova_compute[297021]: 2025-10-05 10:13:23.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:13:24 localhost systemd[1]: tmp-crun.HEdKTs.mount: Deactivated successfully. Oct 5 06:13:24 localhost podman[340327]: 2025-10-05 10:13:24.683765694 +0000 UTC m=+0.092378993 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:13:24 localhost podman[340327]: 2025-10-05 10:13:24.69680497 +0000 UTC m=+0.105418269 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:13:24 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:13:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:24 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:25 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:25 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:25 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:25 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:26 localhost nova_compute[297021]: 2025-10-05 10:13:26.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:26 localhost nova_compute[297021]: 2025-10-05 10:13:26.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:26 localhost nova_compute[297021]: 2025-10-05 10:13:26.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:13:26 localhost nova_compute[297021]: 2025-10-05 10:13:26.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering 
IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:26 localhost nova_compute[297021]: 2025-10-05 10:13:26.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:26 localhost nova_compute[297021]: 2025-10-05 10:13:26.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:27 localhost nova_compute[297021]: 2025-10-05 10:13:27.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:27 localhost nova_compute[297021]: 2025-10-05 10:13:27.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:13:27 localhost nova_compute[297021]: 2025-10-05 10:13:27.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:13:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:13:27 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:27 localhost ceph-mon[308154]: 
log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:13:27 localhost nova_compute[297021]: 2025-10-05 10:13:27.944 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:13:27 localhost nova_compute[297021]: 2025-10-05 10:13:27.944 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:13:27 localhost nova_compute[297021]: 2025-10-05 10:13:27.945 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:13:27 localhost nova_compute[297021]: 2025-10-05 10:13:27.945 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:13:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e246 do_prune osdmap full prune enabled Oct 5 06:13:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e247 e247: 6 total, 6 up, 6 in Oct 5 06:13:28 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e247: 6 total, 6 up, 6 in Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.330 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] 
Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.348 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.348 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.348 2 DEBUG 
oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.369 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.369 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.369 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.369 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.370 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:13:28 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:28 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:28 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:28 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:13:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:13:28 localhost systemd[1]: tmp-crun.MQFKrl.mount: Deactivated successfully. 
Oct 5 06:13:28 localhost podman[340370]: 2025-10-05 10:13:28.687848889 +0000 UTC m=+0.099569645 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:13:28 localhost podman[340370]: 2025-10-05 10:13:28.699777937 +0000 UTC m=+0.111498723 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:13:28 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. Oct 5 06:13:28 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:13:28 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2967641365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.770 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.851 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:13:28 localhost nova_compute[297021]: 2025-10-05 10:13:28.852 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.053 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.054 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11046MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.055 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.055 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.149 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.150 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.150 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.196 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:13:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:13:29 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3070094143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.719 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.725 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.743 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.746 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:13:29 localhost nova_compute[297021]: 2025-10-05 10:13:29.746 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:13:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e247 do_prune osdmap full prune enabled Oct 5 06:13:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e248 e248: 6 total, 6 up, 6 in Oct 5 06:13:29 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e248: 6 total, 6 up, 6 in Oct 5 06:13:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:31 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:31 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:31 localhost ceph-mon[308154]: 
mon.np0005471150@0(leader).osd e248 do_prune osdmap full prune enabled Oct 5 06:13:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e249 e249: 6 total, 6 up, 6 in Oct 5 06:13:31 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e249: 6 total, 6 up, 6 in Oct 5 06:13:31 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:31 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:31 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:31 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:31 localhost nova_compute[297021]: 2025-10-05 10:13:31.742 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:31 localhost nova_compute[297021]: 2025-10-05 10:13:31.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:31 localhost nova_compute[297021]: 2025-10-05 10:13:31.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:31 localhost nova_compute[297021]: 2025-10-05 10:13:31.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:13:31 localhost nova_compute[297021]: 2025-10-05 10:13:31.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:31 localhost nova_compute[297021]: 2025-10-05 10:13:31.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:31 localhost nova_compute[297021]: 2025-10-05 10:13:31.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:32 localhost sshd[340417]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:13:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:13:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth 
rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:34 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:13:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e249 do_prune osdmap full prune enabled Oct 5 06:13:34 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:34 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:34 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:13:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e250 e250: 6 total, 6 up, 6 in Oct 5 06:13:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e250: 6 total, 6 up, 6 in Oct 5 06:13:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:13:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 06:13:35 localhost podman[340420]: 2025-10-05 10:13:35.682042794 +0000 UTC m=+0.088408347 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:13:35 localhost podman[340420]: 2025-10-05 10:13:35.721474105 +0000 UTC m=+0.127839618 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:13:35 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:13:35 localhost systemd[1]: tmp-crun.WytJvb.mount: Deactivated successfully. 
Oct 5 06:13:35 localhost podman[340421]: 2025-10-05 10:13:35.763435843 +0000 UTC m=+0.164402932 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:13:35 localhost podman[340421]: 2025-10-05 10:13:35.803956282 +0000 UTC m=+0.204923381 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:13:35 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:13:36 localhost nova_compute[297021]: 2025-10-05 10:13:36.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:36 localhost nova_compute[297021]: 2025-10-05 10:13:36.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:36 localhost nova_compute[297021]: 2025-10-05 10:13:36.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:13:36 localhost nova_compute[297021]: 2025-10-05 10:13:36.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:36 localhost nova_compute[297021]: 2025-10-05 10:13:36.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:36 localhost nova_compute[297021]: 2025-10-05 10:13:36.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e250 do_prune osdmap full prune enabled Oct 5 06:13:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e251 e251: 6 total, 6 up, 6 in Oct 5 06:13:37 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e251: 6 total, 6 up, 6 in Oct 5 06:13:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:37 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:37 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:38 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:38 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:38 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:38 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:38 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e251 do_prune osdmap full prune enabled Oct 5 06:13:38 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e252 e252: 6 total, 6 up, 6 in Oct 5 06:13:38 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e252: 6 total, 6 up, 6 in Oct 5 06:13:38 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:13:38 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:13:39 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:13:39 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:13:39 localhost ovn_controller[157794]: 2025-10-05T10:13:39Z|00410|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory Oct 5 06:13:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:13:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:13:41 localhost nova_compute[297021]: 2025-10-05 10:13:41.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:41 localhost nova_compute[297021]: 2025-10-05 10:13:41.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:13:41 localhost nova_compute[297021]: 2025-10-05 10:13:41.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:13:41 localhost nova_compute[297021]: 2025-10-05 10:13:41.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:41 localhost nova_compute[297021]: 2025-10-05 10:13:41.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:41 localhost nova_compute[297021]: 2025-10-05 10:13:41.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:13:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:13:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:13:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:42 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:42 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:42 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:42 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:13:42 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:13:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:13:42 localhost podman[340541]: 2025-10-05 10:13:42.685444825 +0000 UTC m=+0.084469383 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 06:13:42 localhost podman[340541]: 2025-10-05 10:13:42.717602782 +0000 UTC 
m=+0.116627370 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 06:13:42 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:13:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e252 do_prune osdmap full prune enabled Oct 5 06:13:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e253 e253: 6 total, 6 up, 6 in Oct 5 06:13:43 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e253: 6 total, 6 up, 6 in Oct 5 06:13:44 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:44 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:44 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:44 localhost sshd[340559]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:13:45 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:45 localhost 
ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:45 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:45 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:13:45 localhost podman[340562]: 2025-10-05 10:13:45.682804927 +0000 UTC m=+0.088136230 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:13:45 localhost podman[340562]: 2025-10-05 10:13:45.728850643 +0000 UTC m=+0.134181936 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS 
Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:13:45 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:13:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:13:46 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3624424787' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:13:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:13:46 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3624424787' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:13:46 localhost nova_compute[297021]: 2025-10-05 10:13:46.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:46 localhost nova_compute[297021]: 2025-10-05 10:13:46.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:13:47 localhost systemd[1]: tmp-crun.aZBMcx.mount: Deactivated successfully. Oct 5 06:13:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e253 do_prune osdmap full prune enabled Oct 5 06:13:47 localhost podman[340587]: 2025-10-05 10:13:47.01404238 +0000 UTC m=+0.093169883 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 
'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Oct 5 06:13:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e254 e254: 6 total, 6 up, 6 in Oct 5 06:13:47 localhost podman[340587]: 2025-10-05 10:13:47.031006002 +0000 UTC m=+0.110133505 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:13:47 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e254: 6 total, 6 up, 6 in Oct 5 06:13:47 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:13:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:13:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e254 do_prune osdmap full prune enabled Oct 5 06:13:48 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e255 e255: 6 total, 6 up, 6 in Oct 5 06:13:48 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e255: 6 total, 6 up, 6 in Oct 5 06:13:48 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:48 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:48 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:48 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:49 localhost nova_compute[297021]: 2025-10-05 10:13:49.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:13:49.529 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:13:49 localhost ovn_metadata_agent[163429]: 2025-10-05 10:13:49.531 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:13:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:13:49 localhost podman[340606]: 2025-10-05 10:13:49.676991619 +0000 UTC m=+0.086144306 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 06:13:49 localhost podman[340606]: 2025-10-05 10:13:49.694822825 +0000 UTC m=+0.103975512 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter) Oct 5 06:13:49 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:13:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:13:49 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2109199037' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:13:49 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:13:49 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2109199037' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:13:51 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:51 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:51 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:51 localhost podman[248506]: time="2025-10-05T10:13:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:13:51 localhost podman[248506]: @ - - [05/Oct/2025:10:13:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:13:51 localhost podman[248506]: @ - - [05/Oct/2025:10:13:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false 
HTTP/1.1" 200 19400 "" "Go-http-client/1.1" Oct 5 06:13:51 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:51 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:51 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:51 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:51 localhost nova_compute[297021]: 2025-10-05 10:13:51.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:52 localhost 
openstack_network_exporter[250601]: ERROR 10:13:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:13:52 localhost openstack_network_exporter[250601]: ERROR 10:13:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:13:52 localhost openstack_network_exporter[250601]: ERROR 10:13:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:13:52 localhost openstack_network_exporter[250601]: ERROR 10:13:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:13:52 localhost openstack_network_exporter[250601]: Oct 5 06:13:52 localhost openstack_network_exporter[250601]: ERROR 10:13:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:13:52 localhost openstack_network_exporter[250601]: Oct 5 06:13:54 localhost ovn_controller[157794]: 2025-10-05T10:13:54Z|00411|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:13:54 localhost nova_compute[297021]: 2025-10-05 10:13:54.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:54 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:13:54 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:54 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:55 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": 
"auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:55 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:55 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:55 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:13:55 localhost systemd[1]: tmp-crun.cvIsL3.mount: Deactivated successfully. Oct 5 06:13:55 localhost podman[340628]: 2025-10-05 10:13:55.676068388 +0000 UTC m=+0.081169344 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:13:55 localhost podman[340628]: 2025-10-05 10:13:55.708742808 +0000 UTC m=+0.113843724 container exec_died 
9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:13:55 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 06:13:56 localhost sshd[340651]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:13:56 localhost nova_compute[297021]: 2025-10-05 10:13:56.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e255 do_prune osdmap full prune enabled Oct 5 06:13:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e256 e256: 6 total, 6 up, 6 in Oct 5 06:13:57 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e256: 6 total, 6 up, 6 in Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0. Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.053627) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70 Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237053673, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 980, "num_deletes": 254, "total_data_size": 768416, "memory_usage": 787288, "flush_reason": "Manual Compaction"} Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237059983, "cf_name": "default", "job": 41, "event": "table_file_creation", 
"file_number": 71, "file_size": 651857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38251, "largest_seqno": 39230, "table_properties": {"data_size": 647390, "index_size": 1938, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12571, "raw_average_key_size": 22, "raw_value_size": 637531, "raw_average_value_size": 1120, "num_data_blocks": 84, "num_entries": 569, "num_filter_entries": 569, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659202, "oldest_key_time": 1759659202, "file_creation_time": 1759659237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}} Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 6401 microseconds, and 2715 cpu microseconds. Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.060027) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 651857 bytes OK Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.060051) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.062834) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.062852) EVENT_LOG_v1 {"time_micros": 1759659237062847, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.062871) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 763354, prev total WAL file size 763354, number of live WAL files 2. Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.063622) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323536' seq:72057594037927935, type:22 .. 
'6D6772737461740034353037' seq:0, type:0; will stop at (end) Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(636KB)], [69(17MB)] Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237063676, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 18687398, "oldest_snapshot_seqno": -1} Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 14384 keys, 16670584 bytes, temperature: kUnknown Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237177498, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 16670584, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16590174, "index_size": 43381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35973, "raw_key_size": 386054, "raw_average_key_size": 26, "raw_value_size": 16347623, "raw_average_value_size": 1136, "num_data_blocks": 1600, "num_entries": 14384, "num_filter_entries": 14384, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759659237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}} Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.177767) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 16670584 bytes Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.180519) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.1 rd, 146.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 17.2 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(54.2) write-amplify(25.6) OK, records in: 14901, records dropped: 517 output_compression: NoCompression Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.180547) EVENT_LOG_v1 {"time_micros": 1759659237180535, "job": 42, "event": "compaction_finished", "compaction_time_micros": 113889, "compaction_time_cpu_micros": 48785, "output_level": 6, "num_output_files": 1, "total_output_size": 16670584, "num_input_records": 14901, "num_output_records": 14384, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005471150/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237180768, "job": 42, "event": "table_file_deletion", "file_number": 71} Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237183158, "job": 42, "event": "table_file_deletion", "file_number": 69} Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.063525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.183266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.183273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.183276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.183279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:13:57.183282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:58 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:58 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:58 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:58 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:13:58 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:58 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:58 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:58 localhost ovn_metadata_agent[163429]: 2025-10-05 10:13:58.532 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:13:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:13:59 localhost podman[340654]: 2025-10-05 10:13:59.676951919 +0000 UTC m=+0.081717459 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:13:59 localhost podman[340654]: 2025-10-05 10:13:59.685706672 +0000 UTC m=+0.090472212 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:13:59 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:14:01 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:14:01 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5717 writes, 39K keys, 5712 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s#012Cumulative WAL: 5717 writes, 5712 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2500 writes, 12K keys, 2496 commit groups, 1.0 writes per commit group, ingest: 11.29 MB, 0.02 MB/s#012Interval WAL: 2500 writes, 2496 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 138.4 0.34 0.13 21 0.016 0 0 0.0 0.0#012 L6 1/0 15.90 MB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 6.6 161.0 146.3 2.14 0.93 20 0.107 252K 10K 0.0 0.0#012 Sum 1/0 15.90 MB 0.0 0.3 0.0 0.3 0.4 0.1 0.0 7.6 138.8 145.2 2.48 1.06 41 0.060 252K 10K 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 12.4 146.5 146.5 1.06 0.48 18 0.059 123K 4782 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 0.0 161.0 146.3 2.14 0.93 20 0.107 252K 10K 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 139.4 0.34 0.13 20 0.017 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.046, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.35 GB write, 0.30 MB/s write, 0.34 GB read, 0.29 MB/s read, 2.5 seconds#012Interval compaction: 0.15 GB write, 0.26 MB/s write, 0.15 GB read, 0.26 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e0777f3350#2 capacity: 304.00 MB usage: 59.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000433 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4164,57.72 MB,18.9855%) FilterBlock(41,729.36 KB,0.234298%) IndexBlock(41,948.11 KB,0.304568%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Oct 5 06:14:01 localhost nova_compute[297021]: 2025-10-05 10:14:01.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:01 localhost 
nova_compute[297021]: 2025-10-05 10:14:01.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:01 localhost nova_compute[297021]: 2025-10-05 10:14:01.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5031 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:14:01 localhost nova_compute[297021]: 2025-10-05 10:14:01.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:01 localhost nova_compute[297021]: 2025-10-05 10:14:01.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:01 localhost nova_compute[297021]: 2025-10-05 10:14:01.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:05 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e256 do_prune osdmap full prune enabled Oct 5 06:14:05 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e257 e257: 6 total, 6 up, 6 in Oct 5 06:14:05 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e257: 6 total, 6 up, 6 in Oct 5 06:14:06 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw 
pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]} v 0) Oct 5 06:14:06 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]} : dispatch Oct 5 06:14:06 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]}]': finished Oct 5 06:14:06 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:06 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw 
path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]} : dispatch Oct 5 06:14:06 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]} : dispatch Oct 5 06:14:06 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]}]': finished Oct 5 06:14:06 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:14:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
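[Editor's note, not part of the log: the ceph-mon audit entries above embed the full mon_command payload as inline JSON after `cmd=`. A minimal, illustrative way to pull that payload back out of such a line for analysis — the sample line and field values are taken from the entries above; the helper name is hypothetical:]

```python
import json
import re

# Greedy match: capture everything between "cmd=" and the trailing " : dispatch",
# which is the JSON object ceph-mon logs for each dispatched command.
AUDIT_CMD = re.compile(r"cmd=(\{.*\}) : dispatch")

def parse_audit_cmd(line: str) -> dict:
    """Decode the mon_command JSON payload from a ceph-mon audit log line."""
    m = AUDIT_CMD.search(line)
    if not m:
        raise ValueError("no cmd={...} payload found in line")
    return json.loads(m.group(1))

# Abbreviated sample modeled on the audit entries above.
line = ("Oct 5 06:14:06 localhost ceph-mon[308154]: from='mgr.34408 ' "
        "entity='mgr.np0005471152.kbhlus' "
        'cmd={"prefix": "auth caps", "entity": "client.bob", '
        '"caps": ["mon", "allow r"]} : dispatch')

cmd = parse_audit_cmd(line)
print(cmd["prefix"], cmd["entity"])  # auth caps client.bob
```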
Oct 5 06:14:06 localhost systemd[1]: tmp-crun.c6J8zM.mount: Deactivated successfully. Oct 5 06:14:06 localhost podman[340679]: 2025-10-05 10:14:06.68228058 +0000 UTC m=+0.091084339 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:14:06 localhost podman[340679]: 2025-10-05 10:14:06.691137726 +0000 UTC m=+0.099941455 container exec_died 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid) Oct 5 06:14:06 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:14:06 localhost podman[340680]: 2025-10-05 10:14:06.739244787 +0000 UTC m=+0.144157151 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Oct 5 06:14:06 localhost podman[340680]: 2025-10-05 10:14:06.753830476 +0000 UTC m=+0.158742820 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:06 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:14:07 localhost nova_compute[297021]: 2025-10-05 10:14:06.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:07 localhost nova_compute[297021]: 2025-10-05 10:14:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:07 localhost nova_compute[297021]: 2025-10-05 10:14:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:14:07 localhost nova_compute[297021]: 2025-10-05 10:14:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:07 localhost nova_compute[297021]: 2025-10-05 10:14:07.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:07 localhost nova_compute[297021]: 2025-10-05 10:14:07.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e257 do_prune osdmap full prune enabled Oct 5 06:14:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e258 e258: 6 total, 6 up, 6 in Oct 5 06:14:09 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e258: 6 total, 6 up, 6 in Oct 5 06:14:09 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]} v 0) Oct 5 06:14:09 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]} : dispatch Oct 5 06:14:09 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]}]': finished Oct 5 06:14:10 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:10 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]} : dispatch Oct 5 06:14:10 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]} : dispatch Oct 
5 06:14:10 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]}]': finished Oct 5 06:14:12 localhost nova_compute[297021]: 2025-10-05 10:14:12.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:12 localhost nova_compute[297021]: 2025-10-05 10:14:12.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:12 localhost nova_compute[297021]: 2025-10-05 10:14:12.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:14:12 localhost nova_compute[297021]: 2025-10-05 10:14:12.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e258 do_prune osdmap full prune enabled Oct 5 06:14:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e259 e259: 6 total, 6 up, 6 in Oct 5 06:14:12 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e259: 6 total, 6 up, 6 in Oct 5 06:14:12 localhost nova_compute[297021]: 2025-10-05 10:14:12.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:12 localhost 
nova_compute[297021]: 2025-10-05 10:14:12.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) Oct 5 06:14:12 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Oct 5 06:14:13 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Oct 5 06:14:13 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:13 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Oct 5 06:14:13 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Oct 5 06:14:13 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Oct 5 06:14:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:14:13 localhost podman[340717]: 2025-10-05 10:14:13.666549379 +0000 UTC m=+0.078905183 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:14:13 localhost podman[340717]: 2025-10-05 10:14:13.67484075 +0000 UTC 
m=+0.087196574 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:14:13 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:14:14 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:14:14.022 272040 INFO neutron.agent.linux.ip_lib [None req-76e278ec-3867-4552-8ec6-67cfe144f113 - - - - - -] Device tap8f054925-41 cannot be used as it has no MAC address#033[00m Oct 5 06:14:14 localhost nova_compute[297021]: 2025-10-05 10:14:14.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:14 localhost kernel: device tap8f054925-41 entered promiscuous mode Oct 5 06:14:14 localhost nova_compute[297021]: 2025-10-05 10:14:14.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:14 localhost NetworkManager[5981]: [1759659254.0977] manager: (tap8f054925-41): new Generic device (/org/freedesktop/NetworkManager/Devices/64) Oct 5 06:14:14 localhost ovn_controller[157794]: 2025-10-05T10:14:14Z|00412|binding|INFO|Claiming lport 8f054925-4189-45c9-af74-897ccf298396 for this chassis. Oct 5 06:14:14 localhost ovn_controller[157794]: 2025-10-05T10:14:14Z|00413|binding|INFO|8f054925-4189-45c9-af74-897ccf298396: Claiming unknown Oct 5 06:14:14 localhost systemd-udevd[340746]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:14:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:14.111 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-d85da06c-02fe-465f-bdcf-b7169e8e49eb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d85da06c-02fe-465f-bdcf-b7169e8e49eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56e2267a0c9241acb2ff482e4a4692ae', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25d95b2e-6dcb-4742-856b-c29528c4d90b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8f054925-4189-45c9-af74-897ccf298396) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:14:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:14.113 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 8f054925-4189-45c9-af74-897ccf298396 in datapath d85da06c-02fe-465f-bdcf-b7169e8e49eb bound to our chassis#033[00m Oct 5 06:14:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:14.114 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d85da06c-02fe-465f-bdcf-b7169e8e49eb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:14:14 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:14.115 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0d44ca-8b2f-40d8-aa3b-adbe20fa0a23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:14:14 localhost journal[237931]: ethtool ioctl error on tap8f054925-41: No such device Oct 5 06:14:14 localhost journal[237931]: ethtool ioctl error on tap8f054925-41: No such device Oct 5 06:14:14 localhost ovn_controller[157794]: 2025-10-05T10:14:14Z|00414|binding|INFO|Setting lport 8f054925-4189-45c9-af74-897ccf298396 ovn-installed in OVS Oct 5 06:14:14 localhost ovn_controller[157794]: 2025-10-05T10:14:14Z|00415|binding|INFO|Setting lport 8f054925-4189-45c9-af74-897ccf298396 up in Southbound Oct 5 06:14:14 localhost nova_compute[297021]: 2025-10-05 10:14:14.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:14 localhost journal[237931]: ethtool ioctl error on tap8f054925-41: No such device Oct 5 06:14:14 localhost journal[237931]: ethtool ioctl error on tap8f054925-41: No such device Oct 5 06:14:14 localhost journal[237931]: ethtool ioctl error on tap8f054925-41: No such device Oct 5 06:14:14 localhost journal[237931]: ethtool ioctl error on tap8f054925-41: No such device Oct 5 06:14:14 localhost journal[237931]: ethtool ioctl error on tap8f054925-41: No such device Oct 5 06:14:14 localhost journal[237931]: ethtool ioctl error on tap8f054925-41: No such device Oct 5 06:14:14 localhost nova_compute[297021]: 2025-10-05 10:14:14.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:14 localhost nova_compute[297021]: 2025-10-05 10:14:14.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:15 localhost podman[340817]: Oct 5 06:14:15 localhost podman[340817]: 2025-10-05 10:14:15.100037808 +0000 UTC m=+0.092816606 container create 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:15 localhost systemd[1]: Started libpod-conmon-5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f.scope. Oct 5 06:14:15 localhost podman[340817]: 2025-10-05 10:14:15.051947925 +0000 UTC m=+0.044726743 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:14:15 localhost systemd[1]: Started libcrun container. 
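[Editor's note, not part of the log: the journal entries above include the same "ethtool ioctl error on tap8f054925-41: No such device" message several times in a row. When post-processing such a capture, consecutive duplicates can be collapsed into one line with a repeat count, in the spirit of classic syslog "message repeated N times" handling. An illustrative sketch; `dedupe` is a hypothetical helper:]

```python
from itertools import groupby

def dedupe(messages):
    """Collapse runs of identical consecutive messages into one annotated line."""
    out = []
    for msg, run in groupby(messages):
        n = len(list(run))
        out.append(msg if n == 1 else f"{msg}  [repeated {n} times]")
    return out

msgs = ["ethtool ioctl error on tap8f054925-41: No such device"] * 6 + [
    "device tap8f054925-41 entered promiscuous mode"]
for entry in dedupe(msgs):
    print(entry)
```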
Oct 5 06:14:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942489f799e49c1f315dd23b008e9665593d09db3f2359c91e6d64ced4f13c5a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:14:15 localhost podman[340817]: 2025-10-05 10:14:15.186656185 +0000 UTC m=+0.179434983 container init 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 06:14:15 localhost podman[340817]: 2025-10-05 10:14:15.195039318 +0000 UTC m=+0.187818116 container start 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:14:15 localhost dnsmasq[340836]: started, version 2.85 cachesize 150
Oct 5 06:14:15 localhost dnsmasq[340836]: DNS service limited to local subnets
Oct 5 06:14:15 localhost dnsmasq[340836]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:14:15 localhost dnsmasq[340836]: warning: no upstream servers configured
Oct 5 06:14:15 localhost dnsmasq-dhcp[340836]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:14:15 localhost dnsmasq[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/addn_hosts - 0 addresses
Oct 5 06:14:15 localhost dnsmasq-dhcp[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/host
Oct 5 06:14:15 localhost dnsmasq-dhcp[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/opts
Oct 5 06:14:15 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:14:15.410 272040 INFO neutron.agent.dhcp.agent [None req-cb7e2b96-0d4d-43c1-983e-113e0aa1eff9 - - - - - -] DHCP configuration for ports {'1e5557fa-6001-46d9-99f5-bea331d49c0e'} is completed#033[00m
Oct 5 06:14:16 localhost nova_compute[297021]: 2025-10-05 10:14:16.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:14:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 06:14:16 localhost podman[340838]: 2025-10-05 10:14:16.666447248 +0000 UTC m=+0.078079272 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:14:16 localhost podman[340838]: 2025-10-05 10:14:16.734778638 +0000 UTC m=+0.146410672 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 06:14:16 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 06:14:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:14:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 5 06:14:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e260 e260: 6 total, 6 up, 6 in
Oct 5 06:14:17 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e260: 6 total, 6 up, 6 in
Oct 5 06:14:17 localhost nova_compute[297021]: 2025-10-05 10:14:17.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:14:17 localhost nova_compute[297021]: 2025-10-05 10:14:17.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:14:17 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:14:17.239 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:14:16Z, description=, device_id=4120182b-df1e-4372-8a7a-1d1453c14eb9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3a87a261-0465-4aae-9b07-003ca616f4a0, ip_allocation=immediate, mac_address=fa:16:3e:0d:b0:d2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:14:12Z, description=, dns_domain=, id=d85da06c-02fe-465f-bdcf-b7169e8e49eb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-2057567887-network, port_security_enabled=True, project_id=56e2267a0c9241acb2ff482e4a4692ae, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23081, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3733, status=ACTIVE, subnets=['ade29e38-93e1-452f-bfb7-63e3bfb51d6b'], tags=[], tenant_id=56e2267a0c9241acb2ff482e4a4692ae, updated_at=2025-10-05T10:14:13Z, vlan_transparent=None, network_id=d85da06c-02fe-465f-bdcf-b7169e8e49eb, port_security_enabled=False, project_id=56e2267a0c9241acb2ff482e4a4692ae, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3745, status=DOWN, tags=[], tenant_id=56e2267a0c9241acb2ff482e4a4692ae, updated_at=2025-10-05T10:14:17Z on network d85da06c-02fe-465f-bdcf-b7169e8e49eb#033[00m
Oct 5 06:14:17 localhost nova_compute[297021]: 2025-10-05 10:14:17.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:14:17 localhost nova_compute[297021]: 2025-10-05 10:14:17.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 5 06:14:17 localhost dnsmasq[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/addn_hosts - 1 addresses
Oct 5 06:14:17 localhost podman[340877]: 2025-10-05 10:14:17.434371351 +0000 UTC m=+0.045110033 container kill 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 5 06:14:17 localhost dnsmasq-dhcp[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/host
Oct 5 06:14:17 localhost dnsmasq-dhcp[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/opts
Oct 5 06:14:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.
Oct 5 06:14:17 localhost podman[340891]: 2025-10-05 10:14:17.540848438 +0000 UTC m=+0.076086518 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 5 06:14:17 localhost podman[340891]: 2025-10-05 10:14:17.550819523 +0000 UTC m=+0.086057603 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 5 06:14:17 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully.
Oct 5 06:14:17 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:14:17.722 272040 INFO neutron.agent.dhcp.agent [None req-84d367a5-1bb6-4f98-8bbb-c6247e667b3d - - - - - -] DHCP configuration for ports {'3a87a261-0465-4aae-9b07-003ca616f4a0'} is completed#033[00m
Oct 5 06:14:18 localhost nova_compute[297021]: 2025-10-05 10:14:18.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:14:18 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:14:18.839 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:14:16Z, description=, device_id=4120182b-df1e-4372-8a7a-1d1453c14eb9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3a87a261-0465-4aae-9b07-003ca616f4a0, ip_allocation=immediate, mac_address=fa:16:3e:0d:b0:d2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:14:12Z, description=, dns_domain=, id=d85da06c-02fe-465f-bdcf-b7169e8e49eb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-2057567887-network, port_security_enabled=True, project_id=56e2267a0c9241acb2ff482e4a4692ae, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23081, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3733, status=ACTIVE, subnets=['ade29e38-93e1-452f-bfb7-63e3bfb51d6b'], tags=[], tenant_id=56e2267a0c9241acb2ff482e4a4692ae, updated_at=2025-10-05T10:14:13Z, vlan_transparent=None, network_id=d85da06c-02fe-465f-bdcf-b7169e8e49eb, port_security_enabled=False, project_id=56e2267a0c9241acb2ff482e4a4692ae, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3745, status=DOWN, tags=[], tenant_id=56e2267a0c9241acb2ff482e4a4692ae, updated_at=2025-10-05T10:14:17Z on network d85da06c-02fe-465f-bdcf-b7169e8e49eb#033[00m
Oct 5 06:14:19 localhost dnsmasq[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/addn_hosts - 1 addresses
Oct 5 06:14:19 localhost dnsmasq-dhcp[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/host
Oct 5 06:14:19 localhost dnsmasq-dhcp[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/opts
Oct 5 06:14:19 localhost podman[340932]: 2025-10-05 10:14:19.074785793 +0000 UTC m=+0.056476936 container kill 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 5 06:14:19 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:14:19.323 272040 INFO neutron.agent.dhcp.agent [None req-d917b108-1e49-4a8f-8c0d-17d1551a971e - - - - - -] DHCP configuration for ports {'3a87a261-0465-4aae-9b07-003ca616f4a0'} is completed#033[00m
Oct 5 06:14:19 localhost nova_compute[297021]: 2025-10-05 10:14:19.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:14:20 localhost nova_compute[297021]: 2025-10-05 10:14:20.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:14:20 localhost nova_compute[297021]: 2025-10-05 10:14:20.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:14:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:20.475 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:14:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:20.477 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:14:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:20.477 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:14:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.
Oct 5 06:14:20 localhost systemd[1]: tmp-crun.IHcrX9.mount: Deactivated successfully.
Oct 5 06:14:20 localhost podman[340952]: 2025-10-05 10:14:20.68048472 +0000 UTC m=+0.091059658 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 5 06:14:20 localhost podman[340952]: 2025-10-05 10:14:20.720877536 +0000 UTC m=+0.131452434 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Oct 5 06:14:20 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully.
Oct 5 06:14:21 localhost podman[248506]: time="2025-10-05T10:14:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:14:21 localhost podman[248506]: @ - - [05/Oct/2025:10:14:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147502 "" "Go-http-client/1.1"
Oct 5 06:14:21 localhost podman[248506]: @ - - [05/Oct/2025:10:14:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19881 "" "Go-http-client/1.1"
Oct 5 06:14:22 localhost openstack_network_exporter[250601]: ERROR 10:14:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:14:22 localhost openstack_network_exporter[250601]: ERROR 10:14:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:14:22 localhost openstack_network_exporter[250601]: ERROR 10:14:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:14:22 localhost openstack_network_exporter[250601]: ERROR 10:14:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:14:22 localhost openstack_network_exporter[250601]:
Oct 5 06:14:22 localhost openstack_network_exporter[250601]: ERROR 10:14:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:14:22 localhost openstack_network_exporter[250601]:
Oct 5 06:14:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:14:22 localhost nova_compute[297021]: 2025-10-05 10:14:22.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:14:22 localhost nova_compute[297021]: 2025-10-05 10:14:22.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:14:22 localhost nova_compute[297021]: 2025-10-05 10:14:22.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 06:14:22 localhost nova_compute[297021]: 2025-10-05 10:14:22.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:14:22 localhost nova_compute[297021]: 2025-10-05 10:14:22.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:14:22 localhost nova_compute[297021]: 2025-10-05 10:14:22.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:14:23 localhost nova_compute[297021]: 2025-10-05 10:14:23.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:14:25 localhost nova_compute[297021]: 2025-10-05 10:14:25.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:14:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.
Oct 5 06:14:26 localhost podman[340972]: 2025-10-05 10:14:26.666818368 +0000 UTC m=+0.079255713 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 06:14:26 localhost podman[340972]: 2025-10-05 10:14:26.683809341 +0000 UTC m=+0.096246646 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 06:14:26 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 06:14:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:14:27 localhost nova_compute[297021]: 2025-10-05 10:14:27.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:14:28 localhost nova_compute[297021]: 2025-10-05 10:14:28.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:14:28 localhost nova_compute[297021]: 2025-10-05 10:14:28.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 06:14:28 localhost nova_compute[297021]: 2025-10-05 10:14:28.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 06:14:28 localhost nova_compute[297021]: 2025-10-05 10:14:28.500 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 5 06:14:28 localhost nova_compute[297021]: 2025-10-05 10:14:28.501 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 5 06:14:28 localhost nova_compute[297021]: 2025-10-05 10:14:28.501 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 5 06:14:28 localhost nova_compute[297021]: 2025-10-05 10:14:28.502 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 5 06:14:28 localhost nova_compute[297021]: 2025-10-05 10:14:28.986 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.012 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.012 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.013 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.035 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.035 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.036 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.036 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.037 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 06:14:29 localhost dnsmasq[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/addn_hosts - 0 addresses
Oct 5 06:14:29 localhost dnsmasq-dhcp[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/host
Oct 5 06:14:29 localhost dnsmasq-dhcp[340836]: read /var/lib/neutron/dhcp/d85da06c-02fe-465f-bdcf-b7169e8e49eb/opts
Oct 5 06:14:29 localhost podman[341033]: 2025-10-05 10:14:29.498754412 +0000 UTC m=+0.058288534 container kill 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, org.label-schema.vendor=CentOS,
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:14:29 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:14:29 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4194922980' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.533 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.638 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.638 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:14:29 localhost ovn_controller[157794]: 2025-10-05T10:14:29Z|00416|binding|INFO|Releasing lport 8f054925-4189-45c9-af74-897ccf298396 from this chassis (sb_readonly=0) Oct 5 06:14:29 localhost kernel: device tap8f054925-41 left promiscuous mode Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.688 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:29 localhost ovn_controller[157794]: 2025-10-05T10:14:29Z|00417|binding|INFO|Setting lport 8f054925-4189-45c9-af74-897ccf298396 down in Southbound Oct 5 06:14:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:29.709 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-d85da06c-02fe-465f-bdcf-b7169e8e49eb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d85da06c-02fe-465f-bdcf-b7169e8e49eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56e2267a0c9241acb2ff482e4a4692ae', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25d95b2e-6dcb-4742-856b-c29528c4d90b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8f054925-4189-45c9-af74-897ccf298396) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:14:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:29.711 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 8f054925-4189-45c9-af74-897ccf298396 in datapath d85da06c-02fe-465f-bdcf-b7169e8e49eb unbound 
from our chassis#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:29.714 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d85da06c-02fe-465f-bdcf-b7169e8e49eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:14:29 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:29.715 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[092b6279-3b44-43e9-a57b-ced81bb2c742]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.869 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.870 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11037MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.871 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.871 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.934 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.935 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:14:29 localhost nova_compute[297021]: 2025-10-05 10:14:29.935 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:14:30 localhost nova_compute[297021]: 2025-10-05 10:14:30.008 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:14:30 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:14:30 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1645214081' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:14:30 localhost nova_compute[297021]: 2025-10-05 10:14:30.461 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:14:30 localhost nova_compute[297021]: 2025-10-05 10:14:30.466 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:14:30 localhost nova_compute[297021]: 2025-10-05 10:14:30.481 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:14:30 localhost nova_compute[297021]: 2025-10-05 10:14:30.482 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:14:30 localhost nova_compute[297021]: 2025-10-05 10:14:30.482 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:14:30 localhost podman[341080]: 2025-10-05 10:14:30.672133679 +0000 UTC m=+0.083986179 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:14:30 localhost podman[341080]: 2025-10-05 
10:14:30.680462771 +0000 UTC m=+0.092315281 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:14:30 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:14:30 localhost ovn_controller[157794]: 2025-10-05T10:14:30Z|00418|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:14:30 localhost nova_compute[297021]: 2025-10-05 10:14:30.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:31 localhost podman[341121]: 2025-10-05 10:14:31.226619124 +0000 UTC m=+0.056037344 container kill 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:14:31 localhost dnsmasq[340836]: exiting on receipt of SIGTERM Oct 5 06:14:31 localhost systemd[1]: libpod-5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f.scope: Deactivated successfully. 
Oct 5 06:14:31 localhost podman[341135]: 2025-10-05 10:14:31.298308074 +0000 UTC m=+0.056525837 container died 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:14:31 localhost podman[341135]: 2025-10-05 10:14:31.33044688 +0000 UTC m=+0.088664633 container cleanup 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:14:31 localhost systemd[1]: libpod-conmon-5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f.scope: Deactivated successfully. 
Oct 5 06:14:31 localhost podman[341137]: 2025-10-05 10:14:31.373984061 +0000 UTC m=+0.126768209 container remove 5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d85da06c-02fe-465f-bdcf-b7169e8e49eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 06:14:31 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:14:31.403 272040 INFO neutron.agent.dhcp.agent [None req-f06c2ff2-55b3-4792-b8b2-d9942dcfd2d3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:14:31 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:14:31.404 272040 INFO neutron.agent.dhcp.agent [None req-f06c2ff2-55b3-4792-b8b2-d9942dcfd2d3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:14:31 localhost systemd[1]: var-lib-containers-storage-overlay-942489f799e49c1f315dd23b008e9665593d09db3f2359c91e6d64ced4f13c5a-merged.mount: Deactivated successfully. Oct 5 06:14:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5dcd8fcff589d032aaa42656c1cd028a20285c163c1b698facbf5126566a7c0f-userdata-shm.mount: Deactivated successfully. Oct 5 06:14:31 localhost systemd[1]: run-netns-qdhcp\x2dd85da06c\x2d02fe\x2d465f\x2dbdcf\x2db7169e8e49eb.mount: Deactivated successfully. 
Oct 5 06:14:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:32 localhost nova_compute[297021]: 2025-10-05 10:14:32.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:37 localhost nova_compute[297021]: 2025-10-05 10:14:37.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:37 localhost nova_compute[297021]: 2025-10-05 10:14:37.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:37 localhost nova_compute[297021]: 2025-10-05 10:14:37.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:14:37 localhost nova_compute[297021]: 2025-10-05 10:14:37.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:37 localhost nova_compute[297021]: 2025-10-05 10:14:37.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:37 localhost nova_compute[297021]: 2025-10-05 10:14:37.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:14:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:14:37 localhost nova_compute[297021]: 2025-10-05 10:14:37.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:37 localhost podman[341167]: 2025-10-05 10:14:37.656955105 +0000 UTC m=+0.067672995 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd) Oct 5 06:14:37 localhost podman[341167]: 2025-10-05 10:14:37.699894739 +0000 UTC m=+0.110612649 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Oct 5 06:14:37 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 06:14:37 localhost podman[341166]: 2025-10-05 10:14:37.701748839 +0000 UTC m=+0.112365386 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:14:37 localhost 
podman[341166]: 2025-10-05 10:14:37.781207796 +0000 UTC m=+0.191824303 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 06:14:37 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.841 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.843 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.860 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.861 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f54c834d-2bd7-4b96-b439-7576814da5f6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:14:38.843602', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19ed5b9e-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': 'eb3598dd1135c9577987bb5916ffb79dd222f3ce406ee3dc1fcb971f435e395b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:14:38.843602', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19ed74f8-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': 'a80abd44be78d31136403dfff2903a9d169afb71da16d0dbc72007f8b1a7223c'}]}, 'timestamp': '2025-10-05 10:14:38.861892', '_unique_id': '3a95395fa585448697db035b842cd0e1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.863 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.864 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.869 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'fd5c27fa-6b89-48be-bd46-e4cf93974b82', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.864912', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19eeb778-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': '6932fcab5ac8ae911727d9e29521b756f8fce64a8f296008a9c867c34ad018d6'}]}, 'timestamp': '2025-10-05 10:14:38.870142', '_unique_id': 'b4133131ddb84715bd4582daa42ab8be'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.871 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.872 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.872 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '06dc6456-ffd9-4619-8ce3-1b9d34a94158', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.872376', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19ef22d0-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': '261d271f2b5e745bfc4850109c4b2f89e7986c88ef9d7417f1771ca901b68ab8'}]}, 'timestamp': '2025-10-05 10:14:38.872913', '_unique_id': '0c1ffadce40742a0a3c67fa9d365c498'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:14:38.873 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.873 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.874 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.875 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.875 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '597328f4-093e-4f9f-863b-c6f26af615cc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.875208', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19ef91ca-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': '14b2ed9137c0a536b36635991d851c05f4102768a7e6f9f022b291d76350e5bf'}]}, 'timestamp': '2025-10-05 10:14:38.875735', '_unique_id': '7efa12020d834fe1ac2179450e29d46b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.876 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.877 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.878 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.878 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '680f1550-ae99-4e18-b6d6-c715c000ec77', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:14:38.878123', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19f00150-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': 'a0772b66b469c4e08b9418b5b99827da46680732564be4ccfd26a0bea7381892'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:14:38.878123', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19f01348-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': '00133269bc0d8cad3047e229b5bcebe85fead5a1137e49cee47ec5f1456b1911'}]}, 'timestamp': '2025-10-05 10:14:38.879009', '_unique_id': 'f82a09e831254f2d846f04ab12b58fee'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:14:38.879 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.879 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.881 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.881 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.881 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f1ba796d-d9bf-4d59-a6db-c916c338abae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:14:38.881314', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19f080a8-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': '183cd2ba824c79ef4644b8fa9472e82d7fb1166891f4b693c8f6a5cad7c467e2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:14:38.881314', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19f09462-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': '386cd7e012965c44b81aa2a0521b59efe6d9dde241aed40f19939daeae4e9755'}]}, 'timestamp': '2025-10-05 10:14:38.882314', '_unique_id': '5c93e594970c404e99a6222e190df90a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.883 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.884 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.891 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.892 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'd834f66c-5f53-413c-bc54-4b2b3cc8df29', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:14:38.884586', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19f2159e-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.10851942, 'message_signature': '7144d82e5d7090fef15e16f23facb3270e599ab14f5908d8379df5e0a3115d07'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:14:38.884586', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19f2278c-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.10851942, 'message_signature': '32d6e43d02157a4ef542ef9c7625684abf761f87f26cd5380127c8bf4873cacf'}]}, 'timestamp': '2025-10-05 10:14:38.892647', '_unique_id': '5ca41e3a16be423e9eb83fba906255bf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.893 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.894 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.895 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a1b65ae1-b4e8-46b5-9294-df4fce90546c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.894994', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19f2949c-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': '09e8c7ae4937c6c4f8a625c47969c0545b8826a64519bb199b5e20fd0773b775'}]}, 'timestamp': '2025-10-05 10:14:38.895486', '_unique_id': 'ca39d3c7270a4709a3d9156eda72bfb8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.896 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.897 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.897 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.898 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e3f0d2eb-2ee2-4f36-9d35-76c6cf1553b0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:14:38.897856', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19f304cc-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': '2bf5465e4389e36dfa01832b2f84c6d7bc989771576c3be82f2c5421006c28f5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:14:38.897856', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19f317fa-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': '6cebd9104e3142d2709e2294a7022dd1b1fb73d59b1c08e6d660fbeec7b5f861'}]}, 'timestamp': '2025-10-05 10:14:38.898817', '_unique_id': 'b46b80d4c7b6479f813331914d46a177'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.899 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.900 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.901 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '850bcefa-a47a-4be4-b91d-65e630189248', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.901088', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19f382a8-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': '0170172dddf9d87db954e4c7a40736c8e89dad4c84f76d55e8b7f203adc7fdca'}]}, 'timestamp': '2025-10-05 10:14:38.901580', '_unique_id': 'c72efce40bcf463fae9f83d4e7043a49'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:14:38.902 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.902 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.903 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.903 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ae6badcb-793e-47f8-842d-7c4d999d14f3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.903655', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19f3e694-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': 'da923de8e995c7dd64e84ec68ab6a6f20c3733eb941b0234dc75266327f455d5'}]}, 'timestamp': '2025-10-05 10:14:38.904114', '_unique_id': '79054435efd04b26a762f2893e6faff2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.905 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.906 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.906 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b93dc649-d99b-4b61-9986-a6ef4d4a698e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.906340', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19f4537c-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': '97b3a177ec4039f52f21f542652acd4a55628cf598c87cd772e7868d974d7990'}]}, 'timestamp': '2025-10-05 10:14:38.906894', '_unique_id': 'c56e374bed164e649b9741af1ed3e495'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.907 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.908 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.909 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ac4ee1e6-f2bf-4d4c-86cf-757a4fde9cbb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.908992', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19f4b736-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': 'a0fc71a8b6494513451dc3ea4f59aacd93e23aecac7f5f2de3dca37c3259e4da'}]}, 'timestamp': '2025-10-05 10:14:38.909480', '_unique_id': '09ea8659e3b54b5a98b9a4f2bf44261a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.910 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.911 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.911 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.911 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.912 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6a82b161-228a-47ff-b9f9-b5964c856955', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.912123', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19f531ca-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': 'ba5dbd3940cabee8c036e6bedf0d4b336be60d6a7a5419f2b679b61eb5656651'}]}, 'timestamp': '2025-10-05 10:14:38.912616', '_unique_id': '5fdf1a1b97ca43478aa950dab989a20a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.913 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:14:38.914 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.914 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.915 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0aa78a88-178c-45d2-90e5-e7939206b13e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:14:38.914714', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 
'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19f596a6-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.10851942, 'message_signature': 'a86748ebc981c4338b0b10ec94b18d5d009dd70fda4c4135bbf882c6844b8a50'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:14:38.914714', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19f5ace0-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.10851942, 'message_signature': '72c2789f2d82e1780b8219d258ed02410addfa372c6a7c2fde60773a6a8f454a'}]}, 'timestamp': '2025-10-05 10:14:38.915712', '_unique_id': 'aadc6962786449bb8aa02a685583d72e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.916 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.917 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.935 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a99ce7f7-458a-4b46-bc4b-582508920470', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:14:38.917886', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '19f8b458-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.158899033, 'message_signature': 'a0cf2e604701d8e0baa130e1eb4a152c1b0f38d01dd62bb468aaa361d15bc8d1'}]}, 'timestamp': '2025-10-05 10:14:38.935618', '_unique_id': 'd920e6d877a04867a4418ad410c11a8f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.936 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.937 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.938 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '92cda575-3b7e-44f4-a33e-8c2ed1325fc8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:14:38.937991', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '19f92442-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.088826986, 'message_signature': '238d898d11587b0a3bf616ff59230701854516df0f43fccdfbf2c21b221d06e5'}]}, 'timestamp': '2025-10-05 10:14:38.938489', '_unique_id': 'bfa5fe827b1e4c618529d809ae939268'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.939 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.940 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.941 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'edffeebb-5c75-4a55-aff7-c5368b76b6a3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:14:38.941056', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19f99be8-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.10851942, 'message_signature': '20a48e2dacdf8880a47a95c6e76e3d052ba9f29408bdb11e3cf45b259c5d7056'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:14:38.941056', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19f9b236-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.10851942, 'message_signature': '907f57b21545ba4440fd8db4d347731630e6b31c668379057422118cb5fc84f3'}]}, 'timestamp': '2025-10-05 10:14:38.942124', '_unique_id': '090fd311fd9d4e428a71716208d6bc05'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.943 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.944 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.944 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 19380000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9975706b-9f0f-4107-ac2e-7c531653a75a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 19380000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:14:38.944908', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '19fa3314-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.158899033, 'message_signature': '17b4a2b9ee166720b40c5e8ac02bb51565a0175137f1f89a1ed7af5b1067128b'}]}, 'timestamp': '2025-10-05 10:14:38.945376', '_unique_id': '0d4de7d770914c6abb99cb68f4c36a91'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.946 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.947 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:14:38.947 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '43648175-4810-48a1-a2ee-cbf327250ebd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:14:38.947903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '19faa736-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': 'dbaf44b592d4c1746ea3697fc82692f4ab8db5f3985da558e48149b900d038a4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:14:38.947903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19fac7ac-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': '45129a9c98ab595d19cf5148802700f1deb513da19d307af26af5308b3fde2a2'}]}, 'timestamp': '2025-10-05 10:14:38.949174', '_unique_id': '0edad2f7333e485aac31100f1cdaabbe'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:14:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.951 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 
06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.951 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4ca36f97-3fe2-4faf-b261-cccd3ad74645', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:14:38.951762', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 
512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19fb3f66-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': 'a2406ea258d2c5cd7465c8d2744eb11f2b233d733ab4b6109600e967a5b67b61'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:14:38.951762', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19fb56f4-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12758.067512967, 'message_signature': '3a4379de1978676b977062b328d14cd3f37ae6e85cee5ffebb2688c123000cd3'}]}, 'timestamp': '2025-10-05 10:14:38.952902', '_unique_id': '78ce19ca3af54d86b77ee2df57416477'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:14:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:14:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:14:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:14:40 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:14:40 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:14:40 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:14:40 localhost nova_compute[297021]: 2025-10-05 10:14:40.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:14:42 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:14:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:42 localhost nova_compute[297021]: 2025-10-05 10:14:42.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:43 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:14:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:14:44 localhost podman[341288]: 2025-10-05 10:14:44.676473433 +0000 UTC m=+0.086720782 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:44 localhost podman[341288]: 2025-10-05 10:14:44.711728363 +0000 UTC 
m=+0.121975682 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 06:14:44 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:14:45 localhost ovn_controller[157794]: 2025-10-05T10:14:45Z|00419|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:14:45 localhost nova_compute[297021]: 2025-10-05 10:14:45.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:14:46 localhost podman[341305]: 2025-10-05 10:14:46.995729085 +0000 UTC m=+0.081510733 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base 
Image) Oct 5 06:14:47 localhost podman[341305]: 2025-10-05 10:14:47.06087294 +0000 UTC m=+0.146654618 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:14:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:47 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:14:47 localhost nova_compute[297021]: 2025-10-05 10:14:47.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:14:47 localhost podman[341334]: 2025-10-05 10:14:47.680707478 +0000 UTC m=+0.087627017 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:47 localhost podman[341334]: 2025-10-05 10:14:47.695673467 +0000 UTC m=+0.102592976 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:47 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. Oct 5 06:14:51 localhost podman[248506]: time="2025-10-05T10:14:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:14:51 localhost podman[248506]: @ - - [05/Oct/2025:10:14:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:14:51 localhost podman[248506]: @ - - [05/Oct/2025:10:14:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19402 "" "Go-http-client/1.1" Oct 5 06:14:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:14:51 localhost systemd[1]: tmp-crun.3qaiA1.mount: Deactivated successfully. Oct 5 06:14:51 localhost podman[341353]: 2025-10-05 10:14:51.674591733 +0000 UTC m=+0.086386992 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.) 
Oct 5 06:14:51 localhost podman[341353]: 2025-10-05 10:14:51.692720687 +0000 UTC m=+0.104515936 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64) Oct 5 06:14:51 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:14:52 localhost openstack_network_exporter[250601]: ERROR 10:14:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:14:52 localhost openstack_network_exporter[250601]: ERROR 10:14:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:14:52 localhost openstack_network_exporter[250601]: ERROR 10:14:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:14:52 localhost openstack_network_exporter[250601]: ERROR 10:14:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:14:52 localhost openstack_network_exporter[250601]: Oct 5 06:14:52 localhost openstack_network_exporter[250601]: ERROR 10:14:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:14:52 localhost openstack_network_exporter[250601]: Oct 5 06:14:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 
348127232 kv_alloc: 318767104 Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0. Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.086926) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73 Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292086994, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1055, "num_deletes": 258, "total_data_size": 812113, "memory_usage": 832808, "flush_reason": "Manual Compaction"} Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292097277, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 795354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39231, "largest_seqno": 40285, "table_properties": {"data_size": 790404, "index_size": 2357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12438, "raw_average_key_size": 20, "raw_value_size": 779672, "raw_average_value_size": 1288, "num_data_blocks": 103, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", 
"merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659237, "oldest_key_time": 1759659237, "file_creation_time": 1759659292, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}} Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 10405 microseconds, and 3691 cpu microseconds. Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.097334) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 795354 bytes OK Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.097362) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.099550) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.099573) EVENT_LOG_v1 {"time_micros": 1759659292099566, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.099595) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 
0 0 1] max score 0.25 Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 806882, prev total WAL file size 807372, number of live WAL files 2. Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.100290) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353136' seq:72057594037927935, type:22 .. '6C6F676D0034373638' seq:0, type:0; will stop at (end) Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(776KB)], [72(15MB)] Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292100357, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 17465938, "oldest_snapshot_seqno": -1} Oct 5 06:14:52 localhost nova_compute[297021]: 2025-10-05 10:14:52.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:52 localhost nova_compute[297021]: 2025-10-05 10:14:52.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 14449 keys, 17272716 bytes, temperature: kUnknown Oct 5 
06:14:52 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292202902, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 17272716, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17190437, "index_size": 45058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36165, "raw_key_size": 388918, "raw_average_key_size": 26, "raw_value_size": 16945300, "raw_average_value_size": 1172, "num_data_blocks": 1665, "num_entries": 14449, "num_filter_entries": 14449, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759659292, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}} Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.203169) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 17272716 bytes Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.205792) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.2 rd, 168.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.9 +0.0 blob) out(16.5 +0.0 blob), read-write-amplify(43.7) write-amplify(21.7) OK, records in: 14989, records dropped: 540 output_compression: NoCompression Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.205822) EVENT_LOG_v1 {"time_micros": 1759659292205809, "job": 44, "event": "compaction_finished", "compaction_time_micros": 102604, "compaction_time_cpu_micros": 49171, "output_level": 6, "num_output_files": 1, "total_output_size": 17272716, "num_input_records": 14989, "num_output_records": 14449, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292206072, "job": 44, "event": "table_file_deletion", "file_number": 74} Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292208636, "job": 
44, "event": "table_file_deletion", "file_number": 72} Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.100190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.208747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.208755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.208760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.208764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:14:52.208768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e260 do_prune osdmap full prune enabled Oct 5 06:14:55 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e261 e261: 6 total, 6 up, 6 in Oct 5 06:14:55 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e261: 6 total, 6 up, 6 in Oct 5 06:14:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:57 localhost nova_compute[297021]: 2025-10-05 10:14:57.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:57 localhost nova_compute[297021]: 2025-10-05 10:14:57.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:14:57 localhost nova_compute[297021]: 2025-10-05 10:14:57.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:14:57 localhost nova_compute[297021]: 2025-10-05 10:14:57.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:57 localhost nova_compute[297021]: 2025-10-05 10:14:57.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:57 localhost nova_compute[297021]: 2025-10-05 10:14:57.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:14:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:14:57 localhost podman[341373]: 2025-10-05 10:14:57.673046824 +0000 UTC m=+0.080028984 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:14:57 localhost podman[341373]: 2025-10-05 10:14:57.681344745 +0000 UTC m=+0.088326915 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:14:57 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:14:58 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:58.396 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:14:58 localhost ovn_metadata_agent[163429]: 2025-10-05 10:14:58.397 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:14:58 localhost nova_compute[297021]: 2025-10-05 10:14:58.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:15:00 localhost ceph-osd[31409]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 21K writes, 87K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.01 MB/s#012Cumulative WAL: 21K writes, 7649 syncs, 2.81 writes per sync, written: 0.08 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 51K keys, 12K commit 
groups, 1.0 writes per commit group, ingest: 46.77 MB, 0.08 MB/s#012Interval WAL: 12K writes, 5439 syncs, 2.33 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 06:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:15:01 localhost podman[341396]: 2025-10-05 10:15:01.667051821 +0000 UTC m=+0.080034513 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:15:01 localhost podman[341396]: 2025-10-05 10:15:01.682147814 +0000 UTC m=+0.095130546 container exec_died 
fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:15:01 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:15:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e261 do_prune osdmap full prune enabled Oct 5 06:15:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e262 e262: 6 total, 6 up, 6 in Oct 5 06:15:02 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e262: 6 total, 6 up, 6 in Oct 5 06:15:02 localhost nova_compute[297021]: 2025-10-05 10:15:02.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:15:05 localhost ceph-osd[32364]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 21K writes, 78K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s#012Cumulative WAL: 21K writes, 7683 syncs, 2.79 writes per sync, written: 0.06 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 13K writes, 44K keys, 13K commit groups, 1.0 writes per commit group, ingest: 25.47 MB, 0.04 MB/s#012Interval WAL: 13K writes, 5793 syncs, 2.29 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 06:15:05 localhost ovn_metadata_agent[163429]: 2025-10-05 10:15:05.400 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:15:07 localhost ceph-mon[308154]: 
mon.np0005471150@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:07 localhost nova_compute[297021]: 2025-10-05 10:15:07.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:07 localhost nova_compute[297021]: 2025-10-05 10:15:07.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:07 localhost nova_compute[297021]: 2025-10-05 10:15:07.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:15:07 localhost nova_compute[297021]: 2025-10-05 10:15:07.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:07 localhost nova_compute[297021]: 2025-10-05 10:15:07.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:07 localhost nova_compute[297021]: 2025-10-05 10:15:07.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:15:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:15:08 localhost systemd[1]: tmp-crun.hA2auK.mount: Deactivated successfully. 
Oct 5 06:15:08 localhost podman[341420]: 2025-10-05 10:15:08.692700123 +0000 UTC m=+0.098715731 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:15:08 localhost podman[341420]: 2025-10-05 10:15:08.727200593 +0000 UTC m=+0.133216211 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid) Oct 5 06:15:08 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:15:08 localhost podman[341421]: 2025-10-05 10:15:08.786180575 +0000 UTC m=+0.188476794 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:15:08 localhost podman[341421]: 2025-10-05 10:15:08.801838381 +0000 UTC m=+0.204134640 container exec_died 
efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:15:08 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:15:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:12 localhost nova_compute[297021]: 2025-10-05 10:15:12.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:12 localhost nova_compute[297021]: 2025-10-05 10:15:12.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:12 localhost nova_compute[297021]: 2025-10-05 10:15:12.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:15:12 localhost nova_compute[297021]: 2025-10-05 10:15:12.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:12 localhost nova_compute[297021]: 2025-10-05 10:15:12.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:12 localhost nova_compute[297021]: 2025-10-05 10:15:12.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:15:15 localhost podman[341460]: 2025-10-05 10:15:15.687482312 +0000 UTC m=+0.091535550 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Oct 5 06:15:15 localhost podman[341460]: 2025-10-05 10:15:15.722068434 +0000 UTC 
m=+0.126121702 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:15:15 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:15:16 localhost ovn_controller[157794]: 2025-10-05T10:15:16Z|00420|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory Oct 5 06:15:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:17 localhost nova_compute[297021]: 2025-10-05 10:15:17.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:17 localhost nova_compute[297021]: 2025-10-05 10:15:17.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:17 localhost nova_compute[297021]: 2025-10-05 10:15:17.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:15:17 localhost nova_compute[297021]: 2025-10-05 10:15:17.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:17 localhost nova_compute[297021]: 2025-10-05 10:15:17.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:17 localhost nova_compute[297021]: 2025-10-05 10:15:17.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:15:17 localhost podman[341478]: 2025-10-05 10:15:17.671948383 +0000 UTC m=+0.081969846 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:15:17 localhost podman[341478]: 2025-10-05 10:15:17.713673335 +0000 UTC m=+0.123694758 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:15:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:15:17 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:15:17 localhost podman[341501]: 2025-10-05 10:15:17.840105534 +0000 UTC m=+0.092578738 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 06:15:17 localhost podman[341501]: 2025-10-05 10:15:17.854892048 +0000 UTC m=+0.107365212 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Oct 5 06:15:17 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:15:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e262 do_prune osdmap full prune enabled Oct 5 06:15:20 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e263 e263: 6 total, 6 up, 6 in Oct 5 06:15:20 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e263: 6 total, 6 up, 6 in Oct 5 06:15:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:15:20.476 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:15:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:15:20.477 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:15:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:15:20.478 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:15:20 localhost nova_compute[297021]: 2025-10-05 10:15:20.891 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:20 localhost nova_compute[297021]: 2025-10-05 10:15:20.891 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:20 localhost 
nova_compute[297021]: 2025-10-05 10:15:20.892 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:20 localhost nova_compute[297021]: 2025-10-05 10:15:20.892 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:20 localhost nova_compute[297021]: 2025-10-05 10:15:20.893 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:15:21 localhost podman[248506]: time="2025-10-05T10:15:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:15:21 localhost podman[248506]: @ - - [05/Oct/2025:10:15:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:15:21 localhost podman[248506]: @ - - [05/Oct/2025:10:15:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19394 "" "Go-http-client/1.1" Oct 5 06:15:22 localhost openstack_network_exporter[250601]: ERROR 10:15:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:15:22 localhost openstack_network_exporter[250601]: ERROR 10:15:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:15:22 localhost openstack_network_exporter[250601]: ERROR 10:15:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs 
db server Oct 5 06:15:22 localhost openstack_network_exporter[250601]: ERROR 10:15:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:15:22 localhost openstack_network_exporter[250601]: Oct 5 06:15:22 localhost openstack_network_exporter[250601]: ERROR 10:15:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:15:22 localhost openstack_network_exporter[250601]: Oct 5 06:15:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:22 localhost nova_compute[297021]: 2025-10-05 10:15:22.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:22 localhost nova_compute[297021]: 2025-10-05 10:15:22.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:22 localhost nova_compute[297021]: 2025-10-05 10:15:22.418 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:15:22 localhost systemd[1]: tmp-crun.QkL382.mount: Deactivated successfully. 
Oct 5 06:15:22 localhost podman[341518]: 2025-10-05 10:15:22.669980343 +0000 UTC m=+0.081189304 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 06:15:22 localhost podman[341518]: 2025-10-05 10:15:22.684725796 +0000 UTC m=+0.095934767 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc., managed_by=edpm_ansible) Oct 5 06:15:22 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:15:24 localhost nova_compute[297021]: 2025-10-05 10:15:24.420 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:26 localhost nova_compute[297021]: 2025-10-05 10:15:26.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e263 do_prune osdmap full prune enabled Oct 5 06:15:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e264 e264: 6 total, 6 up, 6 in Oct 5 06:15:27 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e264: 6 total, 6 up, 6 in Oct 5 06:15:27 localhost nova_compute[297021]: 2025-10-05 10:15:27.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:27 localhost nova_compute[297021]: 2025-10-05 10:15:27.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:27 localhost nova_compute[297021]: 2025-10-05 10:15:27.337 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:15:27 localhost nova_compute[297021]: 2025-10-05 10:15:27.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:27 localhost nova_compute[297021]: 2025-10-05 10:15:27.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:27 localhost nova_compute[297021]: 2025-10-05 10:15:27.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. Oct 5 06:15:27 localhost podman[341541]: 2025-10-05 10:15:27.927609555 +0000 UTC m=+0.082644384 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:15:27 localhost podman[341541]: 2025-10-05 10:15:27.939781379 +0000 UTC m=+0.094816198 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:15:27 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. 
Oct 5 06:15:30 localhost nova_compute[297021]: 2025-10-05 10:15:30.416 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:15:30 localhost nova_compute[297021]: 2025-10-05 10:15:30.451 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:15:30 localhost nova_compute[297021]: 2025-10-05 10:15:30.451 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 06:15:30 localhost nova_compute[297021]: 2025-10-05 10:15:30.452 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.063 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.064 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.064 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.065 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.586 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.605 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.606 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.606 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.629 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.630 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.630 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.631 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 06:15:31 localhost nova_compute[297021]: 2025-10-05 10:15:31.631 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 06:15:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 06:15:32 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2672804435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.093 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 06:15:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.158 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.158 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.388 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.390 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11049MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.391 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.391 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.469 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.470 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.470 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.512 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 06:15:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 06:15:32 localhost podman[341587]: 2025-10-05 10:15:32.665574657 +0000 UTC m=+0.073450358 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 06:15:32 localhost podman[341587]: 2025-10-05 10:15:32.701048902 +0000 UTC m=+0.108924633 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 06:15:32 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 06:15:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 06:15:32 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/291519695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.967 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.972 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.993 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.994 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 5 06:15:32 localhost nova_compute[297021]: 2025-10-05 10:15:32.994 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:15:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 5 06:15:34 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e265 e265: 6 total, 6 up, 6 in
Oct 5 06:15:34 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e265: 6 total, 6 up, 6 in
Oct 5 06:15:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:15:37 localhost nova_compute[297021]: 2025-10-05 10:15:37.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:15:37 localhost nova_compute[297021]: 2025-10-05 10:15:37.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:15:37 localhost nova_compute[297021]: 2025-10-05 10:15:37.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 06:15:37 localhost nova_compute[297021]: 2025-10-05 10:15:37.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:15:37 localhost nova_compute[297021]: 2025-10-05 10:15:37.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:15:37 localhost nova_compute[297021]: 2025-10-05 10:15:37.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:15:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 06:15:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 06:15:39 localhost systemd[1]: tmp-crun.2tBMSq.mount: Deactivated successfully.
Oct 5 06:15:39 localhost podman[341630]: 2025-10-05 10:15:39.682209473 +0000 UTC m=+0.089817955 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:15:39 localhost podman[341630]: 2025-10-05 10:15:39.692985459 +0000 UTC m=+0.100593971 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:15:39 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully.
Oct 5 06:15:39 localhost podman[341631]: 2025-10-05 10:15:39.781032506 +0000 UTC m=+0.185426822 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 5 06:15:39 localhost podman[341631]: 2025-10-05 10:15:39.791371651 +0000 UTC m=+0.195765927 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 5 06:15:39 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully.
Oct 5 06:15:41 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 06:15:41 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:15:41 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:15:41 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.451962) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341452013, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 934, "num_deletes": 253, "total_data_size": 687882, "memory_usage": 705336, "flush_reason": "Manual Compaction"}
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341457646, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 672642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40286, "largest_seqno": 41219, "table_properties": {"data_size": 668320, "index_size": 1921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11346, "raw_average_key_size": 21, "raw_value_size": 659003, "raw_average_value_size": 1227, "num_data_blocks": 84, "num_entries": 537, "num_filter_entries": 537, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659292, "oldest_key_time": 1759659292, "file_creation_time": 1759659341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 5727 microseconds, and 2929 cpu microseconds.
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.457691) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 672642 bytes OK
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.457710) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.459525) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.459546) EVENT_LOG_v1 {"time_micros": 1759659341459539, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.459566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 683203, prev total WAL file size 683203, number of live WAL files 2.
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.460148) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. '7061786F73003133353535' seq:0, type:0; will stop at (end)
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(656KB)], [75(16MB)]
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341460190, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 17945358, "oldest_snapshot_seqno": -1}
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 14461 keys, 16537441 bytes, temperature: kUnknown
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341559481, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 16537441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16456141, "index_size": 44079, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36165, "raw_key_size": 390047, "raw_average_key_size": 26, "raw_value_size": 16211882, "raw_average_value_size": 1121, "num_data_blocks": 1618, "num_entries": 14461, "num_filter_entries": 14461, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658041, "oldest_key_time": 0, "file_creation_time": 1759659341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e13a6ee5-354d-4ab5-a9b4-3ab9ab23ea76", "db_session_id": "J2NOOSTRKLEUC7SFP9C2", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.559769) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 16537441 bytes
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.561431) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.6 rd, 166.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 16.5 +0.0 blob) out(15.8 +0.0 blob), read-write-amplify(51.3) write-amplify(24.6) OK, records in: 14986, records dropped: 525 output_compression: NoCompression
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.561458) EVENT_LOG_v1 {"time_micros": 1759659341561447, "job": 46, "event": "compaction_finished", "compaction_time_micros": 99390, "compaction_time_cpu_micros": 44112, "output_level": 6, "num_output_files": 1, "total_output_size": 16537441, "num_input_records": 14986, "num_output_records": 14461, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341561685, "job": 46, "event": "table_file_deletion", "file_number": 77}
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471150/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341564075, "job": 46, "event": "table_file_deletion", "file_number": 75}
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.460079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.564159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.564174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.564177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.564180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:15:41 localhost ceph-mon[308154]: rocksdb: (Original Log Time 2025/10/05-10:15:41.564183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:15:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 06:15:42 localhost ceph-mon[308154]:
log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:15:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e265 do_prune osdmap full prune enabled Oct 5 06:15:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e266 e266: 6 total, 6 up, 6 in Oct 5 06:15:42 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e266: 6 total, 6 up, 6 in Oct 5 06:15:42 localhost nova_compute[297021]: 2025-10-05 10:15:42.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:42 localhost nova_compute[297021]: 2025-10-05 10:15:42.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:42 localhost nova_compute[297021]: 2025-10-05 10:15:42.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:15:42 localhost nova_compute[297021]: 2025-10-05 10:15:42.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:42 localhost nova_compute[297021]: 2025-10-05 10:15:42.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:42 localhost nova_compute[297021]: 2025-10-05 10:15:42.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:43 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:15:43 
localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e266 do_prune osdmap full prune enabled Oct 5 06:15:43 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e267 e267: 6 total, 6 up, 6 in Oct 5 06:15:43 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e267: 6 total, 6 up, 6 in Oct 5 06:15:43 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e46: np0005471152.kbhlus(active, since 16m), standbys: np0005471150.zwqxye, np0005471151.jecxod Oct 5 06:15:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:15:46 localhost podman[341755]: 2025-10-05 10:15:46.696983027 +0000 UTC m=+0.098519967 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:15:46 localhost podman[341755]: 2025-10-05 10:15:46.706802698 +0000 UTC m=+0.108339648 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:15:46 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:15:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:47 localhost nova_compute[297021]: 2025-10-05 10:15:47.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:47 localhost nova_compute[297021]: 2025-10-05 10:15:47.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:47 localhost nova_compute[297021]: 2025-10-05 10:15:47.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:15:47 localhost nova_compute[297021]: 2025-10-05 10:15:47.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:47 localhost nova_compute[297021]: 2025-10-05 10:15:47.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:47 localhost nova_compute[297021]: 2025-10-05 10:15:47.537 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:15:48 localhost systemd[1]: tmp-crun.mxxR7Q.mount: Deactivated successfully. Oct 5 06:15:48 localhost podman[341772]: 2025-10-05 10:15:48.696601891 +0000 UTC m=+0.099212845 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Oct 
5 06:15:48 localhost podman[341773]: 2025-10-05 10:15:48.767900091 +0000 UTC m=+0.168638304 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 06:15:48 localhost podman[341773]: 2025-10-05 10:15:48.776069529 +0000 UTC m=+0.176807712 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:15:48 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:15:48 localhost podman[341772]: 2025-10-05 10:15:48.790996896 +0000 UTC m=+0.193607840 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:15:48 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:15:51 localhost podman[248506]: time="2025-10-05T10:15:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:15:51 localhost podman[248506]: @ - - [05/Oct/2025:10:15:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:15:51 localhost podman[248506]: @ - - [05/Oct/2025:10:15:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19390 "" "Go-http-client/1.1" Oct 5 06:15:52 localhost openstack_network_exporter[250601]: ERROR 10:15:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:15:52 localhost openstack_network_exporter[250601]: ERROR 10:15:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:15:52 localhost openstack_network_exporter[250601]: ERROR 10:15:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:15:52 localhost openstack_network_exporter[250601]: ERROR 10:15:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:15:52 localhost openstack_network_exporter[250601]: Oct 5 06:15:52 localhost openstack_network_exporter[250601]: ERROR 10:15:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:15:52 localhost openstack_network_exporter[250601]: Oct 5 06:15:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e267 do_prune osdmap full prune enabled Oct 5 06:15:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e268 e268: 6 total, 6 up, 6 in Oct 5 06:15:52 localhost ceph-mon[308154]: log_channel(cluster) 
log [DBG] : osdmap e268: 6 total, 6 up, 6 in Oct 5 06:15:52 localhost nova_compute[297021]: 2025-10-05 10:15:52.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:52 localhost nova_compute[297021]: 2025-10-05 10:15:52.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:52 localhost nova_compute[297021]: 2025-10-05 10:15:52.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:15:52 localhost nova_compute[297021]: 2025-10-05 10:15:52.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:52 localhost nova_compute[297021]: 2025-10-05 10:15:52.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:52 localhost nova_compute[297021]: 2025-10-05 10:15:52.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:15:53 localhost systemd[1]: tmp-crun.NXesoV.mount: Deactivated successfully. 
Oct 5 06:15:53 localhost podman[341815]: 2025-10-05 10:15:53.686504347 +0000 UTC m=+0.092437734 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container) Oct 5 06:15:53 localhost podman[341815]: 2025-10-05 10:15:53.699470882 +0000 UTC m=+0.105404319 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_id=edpm, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.) 
Oct 5 06:15:53 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:15:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:57 localhost nova_compute[297021]: 2025-10-05 10:15:57.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:57 localhost nova_compute[297021]: 2025-10-05 10:15:57.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:15:57 localhost nova_compute[297021]: 2025-10-05 10:15:57.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:15:57 localhost nova_compute[297021]: 2025-10-05 10:15:57.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:57 localhost nova_compute[297021]: 2025-10-05 10:15:57.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:57 localhost nova_compute[297021]: 2025-10-05 10:15:57.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:15:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:15:58 localhost podman[341835]: 2025-10-05 10:15:58.665570933 +0000 UTC m=+0.075807921 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:15:58 localhost podman[341835]: 2025-10-05 10:15:58.671872531 +0000 UTC m=+0.082109539 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:15:58 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:16:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:02 localhost nova_compute[297021]: 2025-10-05 10:16:02.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:16:02 localhost nova_compute[297021]: 2025-10-05 10:16:02.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:16:02 localhost nova_compute[297021]: 2025-10-05 10:16:02.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:16:02 localhost nova_compute[297021]: 2025-10-05 10:16:02.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:16:02 localhost nova_compute[297021]: 2025-10-05 10:16:02.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:02 localhost nova_compute[297021]: 2025-10-05 10:16:02.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:16:02 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:02.742 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:16:02 localhost nova_compute[297021]: 2025-10-05 10:16:02.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:02 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:02.744 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:16:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:16:03 localhost podman[341858]: 2025-10-05 10:16:03.650790583 +0000 UTC m=+0.061495450 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:16:03 localhost podman[341858]: 2025-10-05 10:16:03.657600624 +0000 UTC m=+0.068305481 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:16:03 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:16:05 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e268 do_prune osdmap full prune enabled Oct 5 06:16:05 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e269 e269: 6 total, 6 up, 6 in Oct 5 06:16:05 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e269: 6 total, 6 up, 6 in Oct 5 06:16:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:07 localhost nova_compute[297021]: 2025-10-05 10:16:07.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:09 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:09.746 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:16:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:16:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. 
Oct 5 06:16:10 localhost podman[341882]: 2025-10-05 10:16:10.686564364 +0000 UTC m=+0.087993695 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:16:10 localhost podman[341882]: 2025-10-05 10:16:10.726914009 +0000 UTC m=+0.128343340 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:16:10 localhost podman[341883]: 2025-10-05 10:16:10.744660583 +0000 UTC m=+0.139792267 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Oct 5 06:16:10 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. 
Oct 5 06:16:10 localhost podman[341883]: 2025-10-05 10:16:10.760937356 +0000 UTC m=+0.156069060 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=multipathd) Oct 5 06:16:10 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:16:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e269 do_prune osdmap full prune enabled Oct 5 06:16:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e270 e270: 6 total, 6 up, 6 in Oct 5 06:16:12 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e270: 6 total, 6 up, 6 in Oct 5 06:16:12 localhost nova_compute[297021]: 2025-10-05 10:16:12.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:16:12 localhost nova_compute[297021]: 2025-10-05 10:16:12.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:16:12 localhost nova_compute[297021]: 2025-10-05 10:16:12.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:16:12 localhost nova_compute[297021]: 2025-10-05 10:16:12.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:16:12 localhost nova_compute[297021]: 2025-10-05 10:16:12.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:12 localhost nova_compute[297021]: 2025-10-05 10:16:12.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:16:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e270 do_prune osdmap full prune enabled Oct 5 06:16:13 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd 
e271 e271: 6 total, 6 up, 6 in Oct 5 06:16:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e271: 6 total, 6 up, 6 in Oct 5 06:16:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e271 do_prune osdmap full prune enabled Oct 5 06:16:15 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e272 e272: 6 total, 6 up, 6 in Oct 5 06:16:15 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e272: 6 total, 6 up, 6 in Oct 5 06:16:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:16:17 localhost podman[341921]: 2025-10-05 10:16:17.003695007 +0000 UTC m=+0.085374716 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001) Oct 5 06:16:17 localhost podman[341921]: 2025-10-05 10:16:17.039744227 +0000 UTC m=+0.121423856 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:16:17 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:16:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:17 localhost nova_compute[297021]: 2025-10-05 10:16:17.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:16:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. 
Oct 5 06:16:19 localhost podman[341940]: 2025-10-05 10:16:19.679490908 +0000 UTC m=+0.085160181 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:16:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:16:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1130022335' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:16:19 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:16:19 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1130022335' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:16:19 localhost podman[341940]: 2025-10-05 10:16:19.725907225 +0000 UTC m=+0.131576518 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:16:19 localhost systemd[1]: 
tmp-crun.aliiOF.mount: Deactivated successfully. Oct 5 06:16:19 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. Oct 5 06:16:19 localhost podman[341941]: 2025-10-05 10:16:19.758439611 +0000 UTC m=+0.156305065 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team) Oct 5 06:16:19 localhost podman[341941]: 2025-10-05 10:16:19.774843008 +0000 UTC m=+0.172708462 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm) Oct 5 06:16:19 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: 
Deactivated successfully. Oct 5 06:16:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:20.478 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:16:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:20.479 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:16:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:20.479 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:16:21 localhost podman[248506]: time="2025-10-05T10:16:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:16:21 localhost podman[248506]: @ - - [05/Oct/2025:10:16:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:16:21 localhost podman[248506]: @ - - [05/Oct/2025:10:16:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19392 "" "Go-http-client/1.1" Oct 5 06:16:21 localhost nova_compute[297021]: 2025-10-05 10:16:21.809 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:21 localhost nova_compute[297021]: 2025-10-05 10:16:21.810 2 DEBUG 
oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:21 localhost nova_compute[297021]: 2025-10-05 10:16:21.810 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:21 localhost nova_compute[297021]: 2025-10-05 10:16:21.810 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:16:22 localhost openstack_network_exporter[250601]: ERROR 10:16:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:16:22 localhost openstack_network_exporter[250601]: Oct 5 06:16:22 localhost openstack_network_exporter[250601]: ERROR 10:16:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:16:22 localhost openstack_network_exporter[250601]: ERROR 10:16:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:16:22 localhost openstack_network_exporter[250601]: ERROR 10:16:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:16:22 localhost openstack_network_exporter[250601]: ERROR 10:16:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:16:22 localhost openstack_network_exporter[250601]: Oct 5 06:16:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 
kv_alloc: 318767104 Oct 5 06:16:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e272 do_prune osdmap full prune enabled Oct 5 06:16:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e273 e273: 6 total, 6 up, 6 in Oct 5 06:16:22 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e273: 6 total, 6 up, 6 in Oct 5 06:16:22 localhost nova_compute[297021]: 2025-10-05 10:16:22.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:22 localhost nova_compute[297021]: 2025-10-05 10:16:22.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:16:22 localhost nova_compute[297021]: 2025-10-05 10:16:22.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:16:22 localhost nova_compute[297021]: 2025-10-05 10:16:22.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:16:22 localhost nova_compute[297021]: 2025-10-05 10:16:22.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:16:22 localhost nova_compute[297021]: 2025-10-05 10:16:22.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:22 localhost nova_compute[297021]: 2025-10-05 10:16:22.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:16:24 localhost 
ceph-mon[308154]: mon.np0005471150@0(leader).osd e273 do_prune osdmap full prune enabled Oct 5 06:16:24 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e274 e274: 6 total, 6 up, 6 in Oct 5 06:16:24 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e274: 6 total, 6 up, 6 in Oct 5 06:16:24 localhost nova_compute[297021]: 2025-10-05 10:16:24.417 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:16:24 localhost podman[341985]: 2025-10-05 10:16:24.669312182 +0000 UTC m=+0.080208798 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9) Oct 5 06:16:24 localhost podman[341985]: 2025-10-05 10:16:24.685826962 +0000 UTC m=+0.096723628 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 06:16:24 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. 
Oct 5 06:16:26 localhost nova_compute[297021]: 2025-10-05 10:16:26.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:26 localhost nova_compute[297021]: 2025-10-05 10:16:26.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:27 localhost nova_compute[297021]: 2025-10-05 10:16:27.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:27 localhost nova_compute[297021]: 2025-10-05 10:16:27.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:27 localhost nova_compute[297021]: 2025-10-05 10:16:27.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:16:29 localhost podman[342004]: 2025-10-05 10:16:29.669875163 +0000 UTC m=+0.078441332 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:16:29 localhost podman[342004]: 2025-10-05 10:16:29.680693361 +0000 UTC m=+0.089259540 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:16:29 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:16:29 localhost nova_compute[297021]: 2025-10-05 10:16:29.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:31 localhost nova_compute[297021]: 2025-10-05 10:16:31.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:31 localhost nova_compute[297021]: 2025-10-05 10:16:31.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:16:31 localhost nova_compute[297021]: 2025-10-05 10:16:31.421 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:16:31 localhost nova_compute[297021]: 2025-10-05 10:16:31.532 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:16:31 localhost nova_compute[297021]: 2025-10-05 10:16:31.533 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:16:31 localhost nova_compute[297021]: 2025-10-05 10:16:31.533 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:16:31 localhost nova_compute[297021]: 2025-10-05 10:16:31.533 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:16:31 localhost nova_compute[297021]: 2025-10-05 10:16:31.987 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": 
true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.005 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.005 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.006 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.027 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.028 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 
10:16:32.029 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.029 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.030 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:16:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e274 do_prune osdmap full prune enabled Oct 5 06:16:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e275 e275: 6 total, 6 up, 6 in Oct 5 06:16:32 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e275: 6 total, 6 up, 6 in Oct 5 06:16:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:16:32 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/792803996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.557 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.626 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.627 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.904 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.906 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11016MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.907 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:16:32 localhost nova_compute[297021]: 2025-10-05 10:16:32.908 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.073 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.075 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.076 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.115 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:16:33 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:16:33.565 272040 INFO neutron.agent.linux.ip_lib [None req-0754444b-4237-480d-a04c-1ddc163a2394 - - - - - -] Device tap769fc590-a8 cannot be used as it has no MAC address#033[00m Oct 5 06:16:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:16:33 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1081171290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.590 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:16:33 localhost kernel: device tap769fc590-a8 entered promiscuous mode Oct 5 06:16:33 localhost NetworkManager[5981]: [1759659393.5957] manager: (tap769fc590-a8): new Generic device (/org/freedesktop/NetworkManager/Devices/65) Oct 5 06:16:33 localhost ovn_controller[157794]: 2025-10-05T10:16:33Z|00421|binding|INFO|Claiming lport 769fc590-a8be-41e6-bffd-918cbd7af764 for this chassis. Oct 5 06:16:33 localhost ovn_controller[157794]: 2025-10-05T10:16:33Z|00422|binding|INFO|769fc590-a8be-41e6-bffd-918cbd7af764: Claiming unknown Oct 5 06:16:33 localhost systemd-udevd[342081]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.599 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:33 localhost ovn_controller[157794]: 2025-10-05T10:16:33Z|00423|binding|INFO|Setting lport 769fc590-a8be-41e6-bffd-918cbd7af764 ovn-installed in OVS Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:33 localhost ovn_controller[157794]: 2025-10-05T10:16:33Z|00424|binding|INFO|Setting lport 769fc590-a8be-41e6-bffd-918cbd7af764 up in Southbound Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.623 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.626 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.626 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:16:33 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:33.626 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-734787ee-36a6-4ce9-8b17-c68682929480', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-734787ee-36a6-4ce9-8b17-c68682929480', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4434dbcfa4564256a8905a21cefe3cee', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=033c0a4f-0b91-42ee-ae39-aca6a965e265, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=769fc590-a8be-41e6-bffd-918cbd7af764) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:16:33 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:33.629 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 769fc590-a8be-41e6-bffd-918cbd7af764 in datapath 734787ee-36a6-4ce9-8b17-c68682929480 bound to our chassis#033[00m Oct 5 06:16:33 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:33.631 163434 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 734787ee-36a6-4ce9-8b17-c68682929480 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:33 localhost journal[237931]: ethtool ioctl error on tap769fc590-a8: No such device Oct 5 06:16:33 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:33.637 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[b76d93dd-141d-4c74-9c73-d00fae1fe2e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:16:33 localhost journal[237931]: ethtool ioctl error on tap769fc590-a8: No such device Oct 5 06:16:33 localhost journal[237931]: ethtool ioctl error on tap769fc590-a8: No such device Oct 5 06:16:33 localhost journal[237931]: ethtool ioctl error on tap769fc590-a8: No such device Oct 5 06:16:33 localhost journal[237931]: ethtool ioctl error on tap769fc590-a8: No such device Oct 5 06:16:33 localhost journal[237931]: ethtool ioctl error on tap769fc590-a8: No such device Oct 5 06:16:33 localhost journal[237931]: 
ethtool ioctl error on tap769fc590-a8: No such device Oct 5 06:16:33 localhost journal[237931]: ethtool ioctl error on tap769fc590-a8: No such device Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:33 localhost nova_compute[297021]: 2025-10-05 10:16:33.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. Oct 5 06:16:34 localhost podman[342153]: Oct 5 06:16:34 localhost podman[342153]: 2025-10-05 10:16:34.613323712 +0000 UTC m=+0.103216062 container create d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 5 06:16:34 localhost podman[342153]: 2025-10-05 10:16:34.563950926 +0000 UTC m=+0.053843316 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:16:34 localhost systemd[1]: Started libpod-conmon-d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51.scope. Oct 5 06:16:34 localhost systemd[1]: Started libcrun container. 
Oct 5 06:16:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb213e9bdddc73147d6e86e5ce92ca89050efaa2b9dd00a881077c78f9dbbba4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:16:34 localhost podman[342165]: 2025-10-05 10:16:34.709787043 +0000 UTC m=+0.110854246 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:16:34 localhost podman[342153]: 2025-10-05 10:16:34.719564423 +0000 UTC m=+0.209456783 container init d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:16:34 localhost podman[342153]: 2025-10-05 10:16:34.731927582 +0000 UTC m=+0.221819962 container start d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:16:34 localhost dnsmasq[342193]: started, version 2.85 cachesize 150 Oct 5 06:16:34 localhost dnsmasq[342193]: DNS service limited to local subnets Oct 5 06:16:34 localhost dnsmasq[342193]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:16:34 localhost dnsmasq[342193]: warning: no upstream servers configured Oct 5 06:16:34 localhost dnsmasq-dhcp[342193]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:16:34 localhost dnsmasq[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/addn_hosts - 0 addresses Oct 5 06:16:34 localhost dnsmasq-dhcp[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/host Oct 5 06:16:34 localhost dnsmasq-dhcp[342193]: read 
/var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/opts Oct 5 06:16:34 localhost podman[342165]: 2025-10-05 10:16:34.770183271 +0000 UTC m=+0.171250534 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:16:34 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:16:34 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:16:34.913 272040 INFO neutron.agent.dhcp.agent [None req-d944cebb-71ad-4524-8172-e1c1b8ee8487 - - - - - -] DHCP configuration for ports {'7357dc4d-e872-49d4-8e31-5781af66ff31'} is completed#033[00m Oct 5 06:16:35 localhost nova_compute[297021]: 2025-10-05 10:16:35.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:16:36.416 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:16:36Z, description=, device_id=a8d51cec-3c9e-4c31-afd9-1a7e902d28ae, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=eb4cf428-6f46-4aa4-a98a-19e4ad499ef8, ip_allocation=immediate, mac_address=fa:16:3e:5e:c5:8e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:16:31Z, description=, dns_domain=, id=734787ee-36a6-4ce9-8b17-c68682929480, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-90654340-network, port_security_enabled=True, project_id=4434dbcfa4564256a8905a21cefe3cee, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29070, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3959, status=ACTIVE, subnets=['786363b7-a1f8-4386-adb3-0c26c1c2b9a2'], tags=[], tenant_id=4434dbcfa4564256a8905a21cefe3cee, updated_at=2025-10-05T10:16:32Z, vlan_transparent=None, network_id=734787ee-36a6-4ce9-8b17-c68682929480, port_security_enabled=False, project_id=4434dbcfa4564256a8905a21cefe3cee, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3966, status=DOWN, tags=[], tenant_id=4434dbcfa4564256a8905a21cefe3cee, updated_at=2025-10-05T10:16:36Z on network 734787ee-36a6-4ce9-8b17-c68682929480#033[00m Oct 5 06:16:36 localhost podman[342212]: 2025-10-05 10:16:36.658290754 +0000 UTC m=+0.065348792 container kill d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 06:16:36 localhost systemd[1]: tmp-crun.lk21A2.mount: Deactivated successfully. 
Oct 5 06:16:36 localhost dnsmasq[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/addn_hosts - 1 addresses Oct 5 06:16:36 localhost dnsmasq-dhcp[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/host Oct 5 06:16:36 localhost dnsmasq-dhcp[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/opts Oct 5 06:16:36 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:16:36.864 272040 INFO neutron.agent.dhcp.agent [None req-3bc98b08-719a-4074-9b66-197bba196be0 - - - - - -] DHCP configuration for ports {'eb4cf428-6f46-4aa4-a98a-19e4ad499ef8'} is completed#033[00m Oct 5 06:16:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:16:37.430 272040 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:16:36Z, description=, device_id=a8d51cec-3c9e-4c31-afd9-1a7e902d28ae, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=eb4cf428-6f46-4aa4-a98a-19e4ad499ef8, ip_allocation=immediate, mac_address=fa:16:3e:5e:c5:8e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:16:31Z, description=, dns_domain=, id=734787ee-36a6-4ce9-8b17-c68682929480, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-90654340-network, port_security_enabled=True, project_id=4434dbcfa4564256a8905a21cefe3cee, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29070, qos_policy_id=None, revision_number=2, router:external=False, 
shared=False, standard_attr_id=3959, status=ACTIVE, subnets=['786363b7-a1f8-4386-adb3-0c26c1c2b9a2'], tags=[], tenant_id=4434dbcfa4564256a8905a21cefe3cee, updated_at=2025-10-05T10:16:32Z, vlan_transparent=None, network_id=734787ee-36a6-4ce9-8b17-c68682929480, port_security_enabled=False, project_id=4434dbcfa4564256a8905a21cefe3cee, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3966, status=DOWN, tags=[], tenant_id=4434dbcfa4564256a8905a21cefe3cee, updated_at=2025-10-05T10:16:36Z on network 734787ee-36a6-4ce9-8b17-c68682929480#033[00m Oct 5 06:16:37 localhost podman[342252]: 2025-10-05 10:16:37.652738473 +0000 UTC m=+0.060643917 container kill d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:16:37 localhost dnsmasq[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/addn_hosts - 1 addresses Oct 5 06:16:37 localhost dnsmasq-dhcp[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/host Oct 5 06:16:37 localhost dnsmasq-dhcp[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/opts Oct 5 06:16:37 localhost nova_compute[297021]: 2025-10-05 10:16:37.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:37 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:16:37.926 272040 INFO neutron.agent.dhcp.agent [None 
req-7b581174-bf45-4fcf-8065-3c52314b20da - - - - - -] DHCP configuration for ports {'eb4cf428-6f46-4aa4-a98a-19e4ad499ef8'} is completed#033[00m Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.841 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'name': 'test', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005471150.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8b36437b65444bcdac75beef77b6981e', 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'hostId': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.842 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.842 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.862 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 50 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.863 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.requests volume: 1 
_stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b14c6731-1d7b-4432-accc-70c791d6b7dd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 50, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:16:38.842697', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '617432b2-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': '5d78de7125748eba40540b1b4d8ff82ca67fd4b77d125864e7aa5e93bf343711'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': 
'8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:16:38.842697', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '617444e6-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': 'b1f9478d6a7fb8e7b1a15e751e2d71763b79a204f0e10d78c70ad543ef42e9e2'}]}, 'timestamp': '2025-10-05 10:16:38.863857', '_unique_id': '62d843502237433794fbbc20b7062dc3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging 
self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.865 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.866 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.870 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to 
notifications. Payload={'message_id': '4e8ccc5e-639e-4da8-9cb1-16de08f72b23', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.867005', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '61756e20-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': 'f28c2f54234c7d0a70ff1fc081681777a2cd7619aef6faf7bc9dbdc9c64ee081'}]}, 'timestamp': '2025-10-05 10:16:38.871235', '_unique_id': 'a6c13e45480a4ecca4cd26b260dd6f8e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.872 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 10:16:38.873 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.874 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.874 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7a5060bb-3355-44a5-8884-8af344fb8f88', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 446464, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:16:38.874030', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6175f0b6-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': '8bd8eca9b7d9d94d64273d5439e30d28aa7892616055cae15e69bad512cbc1e2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:16:38.874030', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '617605b0-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': '521bee775707de417b0cfc759d656e901b7feb2a45bec7f84e466fc5e2ef2790'}]}, 'timestamp': '2025-10-05 10:16:38.875080', '_unique_id': '566b4a6112104aa59bfbb688ffabfb52'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.876 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]:
2025-10-05 10:16:38.877 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.877 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '182c95a0-e29e-41aa-b8c2-b110f019417e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.877827', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6176858a-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': '565735a16cb2e0f8d5d1f79b280464058331851ed28dfb1cf5e592a024b4e6a4'}]}, 'timestamp': '2025-10-05 10:16:38.878458', '_unique_id': '15574d79f8be4522a2aed6a0a370741d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.879 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.880 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.881 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.bytes volume: 8100 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '5ecf59ba-f205-484a-b0f4-a5dd27dd7ce3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8100, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.881072', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6177038e-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': 'caae85b8eb41087847c9abb207cc42e9668c4835739dab19a4c46cab083c63a1'}]}, 'timestamp': '2025-10-05 10:16:38.881639', '_unique_id': '2e3f8ecefd5d471f9b133846e999e5d1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.882 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.884 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.884 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.884 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7ef4104a-c011-42c5-bc7e-74afc2654f44', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:16:38.884260', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 
'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6177837c-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': 'ecabc4636d76687ab8dfde5f23e073745129400bd6b12d9a6202a771d076bae9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:16:38.884260', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '61779664-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': 'f64af10eb0750c4f49ddfeb8dae7ac4f36aeee9545ae515d42d7f53b78ec66e9'}]}, 'timestamp': '2025-10-05 10:16:38.885337', '_unique_id': '85cb4c3600cf4dc2b3be2274733d309d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.886 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.887 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.888 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets volume: 118 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '34f9206a-02f1-4326-9c6f-6dc51d03e46b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 118, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.888273', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '6178234a-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': '7728807bf58b909d24c93dd98373c95370ec44ba2977fd9a949ad80b8746c48b'}]}, 'timestamp': '2025-10-05 10:16:38.889074', '_unique_id': 'e0e5d7c7d15a4a768ce4a3c9bc0115ce'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.890 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.891 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.903 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.904 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cd9edb1d-a000-4794-8ff9-b95daf9833c2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:16:38.892006', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '617a7bea-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.115951779, 'message_signature': '568c03ff5a0bb075e8f8f584387c1241a3e606d0a90aafe6b2b17bb21eca64ea'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:16:38.892006', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '617a972e-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.115951779, 'message_signature': '36fcb035c9db1daf5ddfbcffaeebe5bcfe7c0415f462bc9f156e9e54f5c16e0f'}]}, 'timestamp': '2025-10-05 10:16:38.905026', '_unique_id': 'f50aa79c79694a7f871a6713ff05b80f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.906 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.907 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.907 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets volume: 78 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'dc9240e6-9bf1-4e92-973b-4c6abbce872a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 78, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.907283', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '617b0222-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': 'c861984ad0f982f6fecf7e4eb1b40e26ce60d5531be0143273f81bf7af02729b'}]}, 'timestamp': '2025-10-05 10:16:38.907786', '_unique_id': '304d4ae7af4e468daa1b47c54a83904b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.908 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.909 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.926 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/cpu volume: 20000000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '457764da-3bd3-4ea9-879e-82b135dcc1f3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 20000000000, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:16:38.909935', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '617df6b2-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.150468269, 'message_signature': 
'4c1e619c0b85b8ff7280ca3292fdaa54324916d3c2cfd20c6d2c6af73c995d31'}]}, 'timestamp': '2025-10-05 10:16:38.927140', '_unique_id': 'e8fea416186945ec8034b1ccc63267d0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR 
oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.928 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.929 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.929 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9a03c9df-614e-4c6c-ab54-326aa08413ff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.929292', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 
0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '617e5dc8-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': '1079dd73671ba0a33b16d800a627753345ee227c63843930de60f55c4c3d5b14'}]}, 'timestamp': '2025-10-05 10:16:38.929791', '_unique_id': '75985ee12ebf430aabf4ea61de40bd74'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR 
oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.930 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.931 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.931 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.bytes volume: 10162 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '578df787-0f5d-4287-bdf8-3b0562e0fbcc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 10162, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.931887', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '617ec164-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': 'bd3bcdf9ee0bc0770636352aff6690d8c9e7b387729c69e0a60f8d2db7d8e2ed'}]}, 'timestamp': '2025-10-05 10:16:38.932338', '_unique_id': '9e1850413fbe4053b430f2403e1fbc7b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.933 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.934 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.934 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8c3b4c52-510c-4b6e-bffe-e845c8f8c426', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.934508', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '617f2834-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': 'e273be44af4389ab1894cde066a9aec64179c79cd54e5e0011ca021c12e6e2ce'}]}, 'timestamp': '2025-10-05 10:16:38.934968', '_unique_id': '8fcbb08813994626aed48ad7ab85230d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.935 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.936 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.937 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e6c65218-3cf1-4939-8598-4e77b7165e42', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.937070', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '617f8be4-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': 'f870531f9488dbd9170dcca7dc549292d2c782c00d8c7d8df41f7088e3884ffc'}]}, 'timestamp': '2025-10-05 10:16:38.937557', '_unique_id': 'a0e9eab3a1f74693a82a6f7ac5702c39'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.938 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.939 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.939 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 1365860654 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.940 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.write.latency volume: 26548503 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cb9106e8-6195-42da-a592-2e1ada17fe4d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1365860654, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:16:38.939599', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '617fee68-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': '88f902aa1cc8f75d3846d42d0b72698e62944ae869b9786abf010b5fc8630982'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26548503, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:16:38.939599', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '617ffe44-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': '2b792d32d4a0d9db3588dd9db5518173b11e40332825754d110a54d2c785cb88'}]}, 'timestamp': '2025-10-05 10:16:38.940449', '_unique_id': '63e1f89048d34ba4af81cb373efd916f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:16:38.941 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.941 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.942 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.942 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.942 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.942 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'da55cc31-bbbe-44f3-be87-1f24b12a5d82', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': 'instance-00000002-2b20c302-a8d1-4ee0-990b-24973ca23df1-tap4db5c636-30', 'timestamp': '2025-10-05T10:16:38.942854', 'resource_metadata': {'display_name': 'test', 'name': 'tap4db5c636-30', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:a6:2c:a3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4db5c636-30'}, 'message_id': '61806e2e-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.090918762, 'message_signature': '25501682479b6725800f3184a9bab33fa8b8554955745b85fcbbb96727d4278a'}]}, 'timestamp': '2025-10-05 10:16:38.943312', '_unique_id': '359d1ecc9a3f490287398ebef6392a88'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.944 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.945 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.945 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.946 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '57274ee1-2a79-4c93-95ae-b24b8c808e62', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:16:38.945717', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6180ddfa-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.115951779, 'message_signature': '3023f4edcf95651a0ffb63cf2eab74219a015d92db7a0042a1f79f62e21e69f3'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:16:38.945717', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 
'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6180edcc-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.115951779, 'message_signature': '3ab46b8cfb31d0378b1caa65db8745c8757cd78a7cb7e4c682d68b0290b9945e'}]}, 'timestamp': '2025-10-05 10:16:38.946590', '_unique_id': '4289e990803847b4bbf6f4aace2ce5d0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.947 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.948 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.948 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 1340116149 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.949 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.latency volume: 86064139 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': 'a3701eff-9881-440f-9071-9d21589f4301', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1340116149, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:16:38.948688', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '61815168-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': '42f1bbdd8e84fd8b73ce3c568509d77b5a1d6677fe1184643599431c934ed456'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 86064139, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:16:38.948688', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '61816130-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': '90abeb47c84fa584814e5fa9e2c970831ed4defe9b9a86cc6f26cdc74efd9e0b'}]}, 'timestamp': '2025-10-05 10:16:38.949543', '_unique_id': '0b4a2c750d9d46d195a729f5a3649e55'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 
06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.950 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.951 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.952 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e080dbab-42ba-44f2-982e-86a80fc4b875', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:16:38.952115', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6181d76e-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.115951779, 'message_signature': 'b0cb80060814e9ff90771f6ac34f5e768364ec86567b30c01d4a142eb9f3a637'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:16:38.952115', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6181ea88-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.115951779, 'message_signature': '1ef0229a6427aeb11bbf55166428c95a779503b53b195582524d0fd53c0ebc5f'}]}, 'timestamp': '2025-10-05 10:16:38.953024', '_unique_id': '57280b44e2614ec3a30194e041c7a7ae'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 
06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 
2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.953 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.955 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.955 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/memory.usage volume: 51.62109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd4b6d74c-f89f-4741-99f6-2e3240da9ce9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.62109375, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'timestamp': '2025-10-05T10:16:38.955197', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '618250fe-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.150468269, 'message_signature': '710c5a7e9b5564ed806e8895ef7ea9d005219d2c60f6328c0d8a8a4bbf7db7ff'}]}, 'timestamp': '2025-10-05 10:16:38.955659', '_unique_id': 'f104f8e069244394a4198bbef7e54c8a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.956 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.959 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 
10:16:38.959 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.959 12 DEBUG ceilometer.compute.pollsters [-] 2b20c302-a8d1-4ee0-990b-24973ca23df1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '08f06a93-695d-421c-bfb5-4af4bc01b963', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vda', 'timestamp': '2025-10-05T10:16:38.959466', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '6182f798-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': 'e6f556887f2eaa2be4778bb0597edf750d353a625c80468d5583c4699d12118e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '8d17cd5027274bc5883e2354d4ddec6b', 'user_name': None, 'project_id': '8b36437b65444bcdac75beef77b6981e', 'project_name': None, 'resource_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1-vdb', 'timestamp': '2025-10-05T10:16:38.959466', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '2b20c302-a8d1-4ee0-990b-24973ca23df1', 'instance_type': 'm1.small', 'host': 'e59fcb99e48470d5f5214c433ce33ffc4ac90b0201b5cf21dc19e188', 'instance_host': 'np0005471150.localdomain', 'flavor': {'id': '76acf371-9e6c-4c5c-aec4-748e712efe27', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219'}, 'image_ref': 'e521096d-c3e6-4c8e-9ba6-a35f9a80b219', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '61830a58-a1d4-11f0-9396-fa163ec6f33d', 'monotonic_time': 12878.066606024, 'message_signature': 'd23f1158f9e67b6cd3491721381d4830aa85693978f9a63b216ee0828fe55525'}]}, 'timestamp': '2025-10-05 10:16:38.960423', '_unique_id': 'a642964a4458401cbdefa74d3cebfb5c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:16:38 localhost 
ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:16:38 
localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:16:38 localhost ceilometer_agent_compute[245788]: 2025-10-05 10:16:38.961 12 ERROR oslo_messaging.notify.messaging Oct 5 06:16:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e275 do_prune osdmap full prune enabled Oct 5 06:16:40 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e276 e276: 
6 total, 6 up, 6 in Oct 5 06:16:40 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e276: 6 total, 6 up, 6 in Oct 5 06:16:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:16:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:16:41 localhost podman[342272]: 2025-10-05 10:16:41.702494347 +0000 UTC m=+0.099325698 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, 
container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:16:41 localhost podman[342272]: 2025-10-05 10:16:41.743005276 +0000 UTC m=+0.139836647 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:16:41 localhost systemd[1]: 
90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:16:41 localhost podman[342273]: 2025-10-05 10:16:41.799729257 +0000 UTC m=+0.196590960 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd) Oct 5 06:16:41 localhost podman[342273]: 2025-10-05 
10:16:41.812291353 +0000 UTC m=+0.209153055 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd) Oct 5 06:16:41 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:16:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:42 localhost nova_compute[297021]: 2025-10-05 10:16:42.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:42 localhost nova_compute[297021]: 2025-10-05 10:16:42.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:43 localhost ovn_controller[157794]: 2025-10-05T10:16:43Z|00425|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0) Oct 5 06:16:43 localhost nova_compute[297021]: 2025-10-05 10:16:43.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:43 localhost podman[342454]: Oct 5 06:16:43 localhost podman[342454]: 2025-10-05 10:16:43.479612682 +0000 UTC m=+0.076893850 container create 49eaf302641c6eca2022c00acb9044286772e4fcff717160984baa9456b8ccf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_blackwell, io.openshift.expose-services=, distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , RELEASE=main, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, version=7, ceph=True, CEPH_POINT_RELEASE=) Oct 5 06:16:43 localhost systemd[1]: Started libpod-conmon-49eaf302641c6eca2022c00acb9044286772e4fcff717160984baa9456b8ccf0.scope. Oct 5 06:16:43 localhost podman[342454]: 2025-10-05 10:16:43.448362499 +0000 UTC m=+0.045643717 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 06:16:43 localhost systemd[1]: Started libcrun container. Oct 5 06:16:43 localhost podman[342454]: 2025-10-05 10:16:43.567942536 +0000 UTC m=+0.165223704 container init 49eaf302641c6eca2022c00acb9044286772e4fcff717160984baa9456b8ccf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_blackwell, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, ceph=True, build-date=2025-09-24T08:57:55, architecture=x86_64, release=553, io.buildah.version=1.33.12) Oct 5 06:16:43 localhost podman[342454]: 
2025-10-05 10:16:43.580996873 +0000 UTC m=+0.178278051 container start 49eaf302641c6eca2022c00acb9044286772e4fcff717160984baa9456b8ccf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_blackwell, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_BRANCH=main, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-09-24T08:57:55, distribution-scope=public, io.buildah.version=1.33.12, release=553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux ) Oct 5 06:16:43 localhost podman[342454]: 2025-10-05 10:16:43.581592659 +0000 UTC m=+0.178873837 container attach 49eaf302641c6eca2022c00acb9044286772e4fcff717160984baa9456b8ccf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_blackwell, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, 
io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 06:16:43 localhost systemd[1]: libpod-49eaf302641c6eca2022c00acb9044286772e4fcff717160984baa9456b8ccf0.scope: Deactivated successfully. Oct 5 06:16:43 localhost magical_blackwell[342469]: 167 167 Oct 5 06:16:43 localhost podman[342454]: 2025-10-05 10:16:43.587696942 +0000 UTC m=+0.184978130 container died 49eaf302641c6eca2022c00acb9044286772e4fcff717160984baa9456b8ccf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_blackwell, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main, release=553, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True) Oct 5 
06:16:43 localhost podman[342474]: 2025-10-05 10:16:43.720883511 +0000 UTC m=+0.092706321 container remove 49eaf302641c6eca2022c00acb9044286772e4fcff717160984baa9456b8ccf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_blackwell, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Oct 5 06:16:43 localhost systemd[1]: libpod-conmon-49eaf302641c6eca2022c00acb9044286772e4fcff717160984baa9456b8ccf0.scope: Deactivated successfully. 
Oct 5 06:16:43 localhost podman[342498]: Oct 5 06:16:43 localhost podman[342498]: 2025-10-05 10:16:43.948455875 +0000 UTC m=+0.076714075 container create 6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_liskov, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, architecture=x86_64, io.buildah.version=1.33.12, ceph=True, release=553, name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 06:16:43 localhost systemd[1]: Started libpod-conmon-6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407.scope. Oct 5 06:16:44 localhost systemd[1]: Started libcrun container. 
Oct 5 06:16:44 localhost podman[342498]: 2025-10-05 10:16:43.918569949 +0000 UTC m=+0.046828159 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 06:16:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3b31226266d2b9ff5b697547ac2e0f0250b6b6949687bad3484b7b405f13d9/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 06:16:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3b31226266d2b9ff5b697547ac2e0f0250b6b6949687bad3484b7b405f13d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 06:16:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3b31226266d2b9ff5b697547ac2e0f0250b6b6949687bad3484b7b405f13d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 06:16:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3b31226266d2b9ff5b697547ac2e0f0250b6b6949687bad3484b7b405f13d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 06:16:44 localhost podman[342498]: 2025-10-05 10:16:44.025629022 +0000 UTC m=+0.153887222 container init 6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_liskov, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-type=git, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, 
RELEASE=main, ceph=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main) Oct 5 06:16:44 localhost podman[342498]: 2025-10-05 10:16:44.043430576 +0000 UTC m=+0.171688786 container start 6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_liskov, distribution-scope=public, version=7, build-date=2025-09-24T08:57:55, RELEASE=main, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 06:16:44 localhost podman[342498]: 2025-10-05 10:16:44.043706713 +0000 UTC m=+0.171964953 container attach 6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_liskov, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, release=553, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public) Oct 5 06:16:44 localhost systemd[1]: var-lib-containers-storage-overlay-6823898ec12642fbf7e6cbc46757af5a3e56f7c5eb4a5caa12581f21ec2869a9-merged.mount: Deactivated successfully. 
Oct 5 06:16:44 localhost podman[342901]: 2025-10-05 10:16:44.779763497 +0000 UTC m=+0.057300137 container kill d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 06:16:44 localhost dnsmasq[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/addn_hosts - 0 addresses Oct 5 06:16:44 localhost dnsmasq-dhcp[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/host Oct 5 06:16:44 localhost dnsmasq-dhcp[342193]: read /var/lib/neutron/dhcp/734787ee-36a6-4ce9-8b17-c68682929480/opts Oct 5 06:16:45 localhost ovn_controller[157794]: 2025-10-05T10:16:45Z|00426|binding|INFO|Releasing lport 769fc590-a8be-41e6-bffd-918cbd7af764 from this chassis (sb_readonly=0) Oct 5 06:16:45 localhost kernel: device tap769fc590-a8 left promiscuous mode Oct 5 06:16:45 localhost ovn_controller[157794]: 2025-10-05T10:16:45Z|00427|binding|INFO|Setting lport 769fc590-a8be-41e6-bffd-918cbd7af764 down in Southbound Oct 5 06:16:45 localhost nova_compute[297021]: 2025-10-05 10:16:45.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:45.037 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], 
up=[False], options={'requested-chassis': 'np0005471150.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp13068dad-f321-552c-93d3-b8ac5e3ff7c1-734787ee-36a6-4ce9-8b17-c68682929480', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-734787ee-36a6-4ce9-8b17-c68682929480', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4434dbcfa4564256a8905a21cefe3cee', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471150.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=033c0a4f-0b91-42ee-ae39-aca6a965e265, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=769fc590-a8be-41e6-bffd-918cbd7af764) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:16:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:45.040 163434 INFO neutron.agent.ovn.metadata.agent [-] Port 769fc590-a8be-41e6-bffd-918cbd7af764 in datapath 734787ee-36a6-4ce9-8b17-c68682929480 unbound from our chassis#033[00m Oct 5 06:16:45 localhost ovn_metadata_agent[163429]: 2025-10-05 10:16:45.042 163434 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 734787ee-36a6-4ce9-8b17-c68682929480, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:16:45 localhost nova_compute[297021]: 2025-10-05 10:16:45.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:45 localhost ovn_metadata_agent[163429]: 2025-10-05 
10:16:45.044 163567 DEBUG oslo.privsep.daemon [-] privsep: reply[70fbeca5-5420-4fbc-ae57-19d731edfefe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:16:45 localhost nova_compute[297021]: 2025-10-05 10:16:45.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 06:16:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 06:16:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:45 localhost tender_liskov[342513]: [ Oct 5 06:16:45 localhost tender_liskov[342513]: { Oct 5 06:16:45 localhost tender_liskov[342513]: "available": false, Oct 5 06:16:45 localhost tender_liskov[342513]: "ceph_device": false, Oct 5 06:16:45 localhost tender_liskov[342513]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 5 06:16:45 localhost tender_liskov[342513]: "lsm_data": {}, Oct 5 06:16:45 localhost tender_liskov[342513]: "lvs": [], Oct 5 06:16:45 localhost tender_liskov[342513]: "path": "/dev/sr0", Oct 5 06:16:45 localhost tender_liskov[342513]: "rejected_reasons": [ Oct 5 06:16:45 localhost tender_liskov[342513]: "Insufficient space (<5GB)", Oct 5 06:16:45 localhost tender_liskov[342513]: "Has a FileSystem" Oct 5 06:16:45 localhost tender_liskov[342513]: ], Oct 5 06:16:45 localhost tender_liskov[342513]: "sys_api": { Oct 5 06:16:45 localhost tender_liskov[342513]: "actuators": null, Oct 5 06:16:45 localhost tender_liskov[342513]: "device_nodes": "sr0", Oct 
5 06:16:45 localhost tender_liskov[342513]: "human_readable_size": "482.00 KB", Oct 5 06:16:45 localhost tender_liskov[342513]: "id_bus": "ata", Oct 5 06:16:45 localhost tender_liskov[342513]: "model": "QEMU DVD-ROM", Oct 5 06:16:45 localhost tender_liskov[342513]: "nr_requests": "2", Oct 5 06:16:45 localhost tender_liskov[342513]: "partitions": {}, Oct 5 06:16:45 localhost tender_liskov[342513]: "path": "/dev/sr0", Oct 5 06:16:45 localhost tender_liskov[342513]: "removable": "1", Oct 5 06:16:45 localhost tender_liskov[342513]: "rev": "2.5+", Oct 5 06:16:45 localhost tender_liskov[342513]: "ro": "0", Oct 5 06:16:45 localhost tender_liskov[342513]: "rotational": "1", Oct 5 06:16:45 localhost tender_liskov[342513]: "sas_address": "", Oct 5 06:16:45 localhost tender_liskov[342513]: "sas_device_handle": "", Oct 5 06:16:45 localhost tender_liskov[342513]: "scheduler_mode": "mq-deadline", Oct 5 06:16:45 localhost tender_liskov[342513]: "sectors": 0, Oct 5 06:16:45 localhost tender_liskov[342513]: "sectorsize": "2048", Oct 5 06:16:45 localhost tender_liskov[342513]: "size": 493568.0, Oct 5 06:16:45 localhost tender_liskov[342513]: "support_discard": "0", Oct 5 06:16:45 localhost tender_liskov[342513]: "type": "disk", Oct 5 06:16:45 localhost tender_liskov[342513]: "vendor": "QEMU" Oct 5 06:16:45 localhost tender_liskov[342513]: } Oct 5 06:16:45 localhost tender_liskov[342513]: } Oct 5 06:16:45 localhost tender_liskov[342513]: ] Oct 5 06:16:45 localhost systemd[1]: libpod-6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407.scope: Deactivated successfully. Oct 5 06:16:45 localhost systemd[1]: libpod-6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407.scope: Consumed 1.200s CPU time. 
Oct 5 06:16:45 localhost podman[342498]: 2025-10-05 10:16:45.249076493 +0000 UTC m=+1.377334653 container died 6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_liskov, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, version=7, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 5 06:16:45 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:45 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:45 localhost systemd[1]: var-lib-containers-storage-overlay-5d3b31226266d2b9ff5b697547ac2e0f0250b6b6949687bad3484b7b405f13d9-merged.mount: Deactivated successfully. 
Oct 5 06:16:45 localhost podman[344752]: 2025-10-05 10:16:45.348257896 +0000 UTC m=+0.088752186 container remove 6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_liskov, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main) Oct 5 06:16:45 localhost systemd[1]: libpod-conmon-6a6e1734c77fceef5a208ff49d7276453c473fdb6d86bf6990fa969f4da9f407.scope: Deactivated successfully. 
Oct 5 06:16:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 06:16:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 06:16:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 06:16:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 06:16:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:45 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 06:16:45 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:45 localhost ovn_controller[157794]: 2025-10-05T10:16:45Z|00428|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:16:45 localhost nova_compute[297021]: 2025-10-05 10:16:45.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:46 localhost podman[344801]: 2025-10-05 10:16:46.431732458 +0000 UTC m=+0.062929259 container kill d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 5 06:16:46 localhost dnsmasq[342193]: exiting on receipt of SIGTERM
Oct 5 06:16:46 localhost systemd[1]: libpod-d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51.scope: Deactivated successfully.
Oct 5 06:16:46 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:46 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:46 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:46 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:46 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:16:46 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:46 localhost podman[344813]: 2025-10-05 10:16:46.508350029 +0000 UTC m=+0.060617766 container died d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 06:16:46 localhost systemd[1]: tmp-crun.QZjvLT.mount: Deactivated successfully.
Oct 5 06:16:46 localhost podman[344813]: 2025-10-05 10:16:46.554660533 +0000 UTC m=+0.106928220 container cleanup d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 5 06:16:46 localhost systemd[1]: libpod-conmon-d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51.scope: Deactivated successfully.
Oct 5 06:16:46 localhost podman[344815]: 2025-10-05 10:16:46.638157569 +0000 UTC m=+0.182293650 container remove d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-734787ee-36a6-4ce9-8b17-c68682929480, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 5 06:16:46 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:16:46.665 272040 INFO neutron.agent.dhcp.agent [None req-76282658-9606-4216-abe5-b88aeec6f78f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:16:46 localhost neutron_dhcp_agent[272036]: 2025-10-05 10:16:46.666 272040 INFO neutron.agent.dhcp.agent [None req-76282658-9606-4216-abe5-b88aeec6f78f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:16:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 06:16:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:16:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 5 06:16:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 e277: 6 total, 6 up, 6 in
Oct 5 06:16:47 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : osdmap e277: 6 total, 6 up, 6 in
Oct 5 06:16:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.
Oct 5 06:16:47 localhost podman[344841]: 2025-10-05 10:16:47.426673129 +0000 UTC m=+0.083282740 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 5 06:16:47 localhost systemd[1]: var-lib-containers-storage-overlay-eb213e9bdddc73147d6e86e5ce92ca89050efaa2b9dd00a881077c78f9dbbba4-merged.mount: Deactivated successfully.
Oct 5 06:16:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d81b4f6c7570d33658b9b111e1eca4d62c67b739169514172e0509b71e619a51-userdata-shm.mount: Deactivated successfully.
Oct 5 06:16:47 localhost systemd[1]: run-netns-qdhcp\x2d734787ee\x2d36a6\x2d4ce9\x2d8b17\x2dc68682929480.mount: Deactivated successfully.
Oct 5 06:16:47 localhost podman[344841]: 2025-10-05 10:16:47.440658182 +0000 UTC m=+0.097267733 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 5 06:16:47 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully.
Oct 5 06:16:47 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:16:47 localhost nova_compute[297021]: 2025-10-05 10:16:47.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.
Oct 5 06:16:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.
Oct 5 06:16:50 localhost podman[344859]: 2025-10-05 10:16:50.671707619 +0000 UTC m=+0.078698278 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:16:50 localhost podman[344860]: 2025-10-05 10:16:50.73328867 +0000 UTC m=+0.137625348 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:16:50 localhost podman[344859]: 2025-10-05 10:16:50.747738136 +0000 UTC m=+0.154728765 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 5 06:16:50 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully.
Oct 5 06:16:50 localhost podman[344860]: 2025-10-05 10:16:50.798293163 +0000 UTC m=+0.202629821 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 5 06:16:50 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully.
Oct 5 06:16:51 localhost podman[248506]: time="2025-10-05T10:16:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:16:51 localhost podman[248506]: @ - - [05/Oct/2025:10:16:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1"
Oct 5 06:16:51 localhost podman[248506]: @ - - [05/Oct/2025:10:16:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19401 "" "Go-http-client/1.1"
Oct 5 06:16:52 localhost openstack_network_exporter[250601]: ERROR 10:16:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:16:52 localhost openstack_network_exporter[250601]: ERROR 10:16:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:16:52 localhost openstack_network_exporter[250601]: ERROR 10:16:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:16:52 localhost openstack_network_exporter[250601]: ERROR 10:16:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:16:52 localhost openstack_network_exporter[250601]:
Oct 5 06:16:52 localhost openstack_network_exporter[250601]: ERROR 10:16:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:16:52 localhost openstack_network_exporter[250601]:
Oct 5 06:16:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:16:52 localhost nova_compute[297021]: 2025-10-05 10:16:52.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:16:52 localhost nova_compute[297021]: 2025-10-05 10:16:52.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:54 localhost nova_compute[297021]: 2025-10-05 10:16:54.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.
Oct 5 06:16:55 localhost podman[344902]: 2025-10-05 10:16:55.678437413 +0000 UTC m=+0.081893684 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., architecture=x86_64, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Oct 5 06:16:55 localhost podman[344902]: 2025-10-05 10:16:55.691945562 +0000 UTC m=+0.095401793 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal)
Oct 5 06:16:55 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully.
Oct 5 06:16:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:16:57 localhost nova_compute[297021]: 2025-10-05 10:16:57.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:58 localhost ovn_controller[157794]: 2025-10-05T10:16:58Z|00429|binding|INFO|Releasing lport cd4e79ca-7111-4d41-b9b0-672ba46474d1 from this chassis (sb_readonly=0)
Oct 5 06:16:59 localhost nova_compute[297021]: 2025-10-05 10:16:59.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.
Oct 5 06:17:00 localhost podman[344922]: 2025-10-05 10:17:00.672600083 +0000 UTC m=+0.082276543 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 06:17:00 localhost podman[344922]: 2025-10-05 10:17:00.685775244 +0000 UTC m=+0.095451674 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 06:17:00 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 06:17:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:17:02 localhost nova_compute[297021]: 2025-10-05 10:17:02.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:17:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 06:17:05 localhost podman[344944]: 2025-10-05 10:17:05.684370462 +0000 UTC m=+0.089263519 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 06:17:05 localhost podman[344944]: 2025-10-05 10:17:05.721900442 +0000 UTC m=+0.126793479 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 06:17:05 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
Oct 5 06:17:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:17:07 localhost nova_compute[297021]: 2025-10-05 10:17:07.879 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:17:07 localhost nova_compute[297021]: 2025-10-05 10:17:07.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:17:07 localhost nova_compute[297021]: 2025-10-05 10:17:07.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:17:07 localhost nova_compute[297021]: 2025-10-05 10:17:07.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 06:17:07 localhost nova_compute[297021]: 2025-10-05 10:17:07.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:17:08 localhost nova_compute[297021]: 2025-10-05 10:17:08.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:17:08 localhost nova_compute[297021]: 2025-10-05 10:17:08.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:17:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:17:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.
Oct 5 06:17:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.
Oct 5 06:17:12 localhost systemd[1]: tmp-crun.AX4ODK.mount: Deactivated successfully.
Oct 5 06:17:12 localhost podman[344968]: 2025-10-05 10:17:12.697236236 +0000 UTC m=+0.095047584 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3)
Oct 5 06:17:12 localhost podman[344967]: 2025-10-05 10:17:12.746152539 +0000 UTC m=+0.147114961 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:17:12 localhost podman[344967]: 2025-10-05 10:17:12.755151669 +0000 UTC m=+0.156114051 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 5 06:17:12 localhost podman[344968]: 2025-10-05 10:17:12.765206397 +0000 UTC m=+0.163017715 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd,
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:17:12 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:17:12 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. 
Oct 5 06:17:13 localhost nova_compute[297021]: 2025-10-05 10:17:13.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:13 localhost nova_compute[297021]: 2025-10-05 10:17:13.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:13 localhost nova_compute[297021]: 2025-10-05 10:17:13.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:17:13 localhost nova_compute[297021]: 2025-10-05 10:17:13.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:13 localhost nova_compute[297021]: 2025-10-05 10:17:13.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:13 localhost nova_compute[297021]: 2025-10-05 10:17:13.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:17 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. 
Oct 5 06:17:17 localhost podman[345006]: 2025-10-05 10:17:17.673254752 +0000 UTC m=+0.082311554 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:17:17 localhost podman[345006]: 2025-10-05 10:17:17.683111825 +0000 UTC 
m=+0.092168617 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Oct 5 06:17:17 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. 
Oct 5 06:17:18 localhost nova_compute[297021]: 2025-10-05 10:17:18.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:18 localhost nova_compute[297021]: 2025-10-05 10:17:18.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:18 localhost nova_compute[297021]: 2025-10-05 10:17:18.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:17:18 localhost nova_compute[297021]: 2025-10-05 10:17:18.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:18 localhost nova_compute[297021]: 2025-10-05 10:17:18.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:18 localhost nova_compute[297021]: 2025-10-05 10:17:18.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:20 localhost nova_compute[297021]: 2025-10-05 10:17:20.445 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:20 localhost nova_compute[297021]: 2025-10-05 10:17:20.446 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:20 localhost nova_compute[297021]: 2025-10-05 
10:17:20.446 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:17:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:17:20.479 163434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:17:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:17:20.480 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:17:20 localhost ovn_metadata_agent[163429]: 2025-10-05 10:17:20.481 163434 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:17:21 localhost podman[248506]: time="2025-10-05T10:17:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:17:21 localhost podman[248506]: @ - - [05/Oct/2025:10:17:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:17:21 localhost podman[248506]: @ - - [05/Oct/2025:10:17:21 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19396 "" "Go-http-client/1.1" Oct 5 06:17:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. 
Oct 5 06:17:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:17:21 localhost podman[345024]: 2025-10-05 10:17:21.682752673 +0000 UTC m=+0.090837801 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:17:21 localhost podman[345025]: 2025-10-05 10:17:21.737751319 +0000 UTC m=+0.141581555 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, config_id=edpm, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:17:21 localhost podman[345024]: 2025-10-05 10:17:21.747336374 +0000 UTC m=+0.155421502 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:17:21 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:17:21 localhost podman[345025]: 2025-10-05 10:17:21.803068999 +0000 UTC m=+0.206899225 container exec_died dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:17:21 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:17:22 localhost openstack_network_exporter[250601]: ERROR 10:17:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:17:22 localhost openstack_network_exporter[250601]: ERROR 10:17:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:17:22 localhost openstack_network_exporter[250601]: ERROR 10:17:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:17:22 localhost openstack_network_exporter[250601]: ERROR 10:17:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:17:22 localhost openstack_network_exporter[250601]: Oct 5 06:17:22 localhost openstack_network_exporter[250601]: ERROR 10:17:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:17:22 localhost openstack_network_exporter[250601]: Oct 5 06:17:22 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:22 localhost nova_compute[297021]: 2025-10-05 10:17:22.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:22 localhost nova_compute[297021]: 2025-10-05 10:17:22.423 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:23 localhost nova_compute[297021]: 2025-10-05 10:17:23.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 
06:17:23 localhost nova_compute[297021]: 2025-10-05 10:17:23.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:23 localhost nova_compute[297021]: 2025-10-05 10:17:23.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:17:23 localhost nova_compute[297021]: 2025-10-05 10:17:23.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:23 localhost nova_compute[297021]: 2025-10-05 10:17:23.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:23 localhost nova_compute[297021]: 2025-10-05 10:17:23.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:24 localhost nova_compute[297021]: 2025-10-05 10:17:24.418 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:24 localhost ovn_metadata_agent[163429]: 2025-10-05 10:17:24.736 163434 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, 
ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:17:24 localhost ovn_metadata_agent[163429]: 2025-10-05 10:17:24.737 163434 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:17:24 localhost nova_compute[297021]: 2025-10-05 10:17:24.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. Oct 5 06:17:26 localhost podman[345066]: 2025-10-05 10:17:26.668234491 +0000 UTC m=+0.079769447 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, release=1755695350, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 06:17:26 localhost podman[345066]: 2025-10-05 10:17:26.706644475 +0000 UTC m=+0.118179451 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, config_id=edpm, 
io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc.) Oct 5 06:17:26 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:17:27 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:27 localhost nova_compute[297021]: 2025-10-05 10:17:27.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:28 localhost nova_compute[297021]: 2025-10-05 10:17:28.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:28 localhost nova_compute[297021]: 2025-10-05 10:17:28.421 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 
06:17:29 localhost nova_compute[297021]: 2025-10-05 10:17:29.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:29 localhost nova_compute[297021]: 2025-10-05 10:17:29.422 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 06:17:30 localhost ovn_controller[157794]: 2025-10-05T10:17:30Z|00430|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Oct 5 06:17:31 localhost nova_compute[297021]: 2025-10-05 10:17:31.440 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:31 localhost nova_compute[297021]: 2025-10-05 10:17:31.441 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:17:31 localhost nova_compute[297021]: 2025-10-05 10:17:31.441 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:17:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd. 
Oct 5 06:17:31 localhost podman[345086]: 2025-10-05 10:17:31.674434731 +0000 UTC m=+0.083601938 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:17:31 localhost podman[345086]: 2025-10-05 10:17:31.710809921 +0000 UTC m=+0.119977108 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:17:31 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully. Oct 5 06:17:31 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:17:31 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/789770123' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:17:32 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:32 localhost nova_compute[297021]: 2025-10-05 10:17:32.245 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:17:32 localhost nova_compute[297021]: 2025-10-05 10:17:32.245 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquired lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:17:32 localhost nova_compute[297021]: 2025-10-05 10:17:32.245 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Oct 5 06:17:32 localhost nova_compute[297021]: 2025-10-05 10:17:32.246 2 DEBUG nova.objects.instance [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 
2b20c302-a8d1-4ee0-990b-24973ca23df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.448 2 DEBUG nova.network.neutron [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updating instance_info_cache with network_info: [{"id": "4db5c636-3094-4e86-9093-8123489e64be", "address": "fa:16:3e:a6:2c:a3", "network": {"id": "20d6a6dc-0f38-4a89-b3fc-56befd04e92f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "192.168.0.56", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "8b36437b65444bcdac75beef77b6981e", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db5c636-30", "ovs_interfaceid": "4db5c636-3094-4e86-9093-8123489e64be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.471 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Releasing lock "refresh_cache-2b20c302-a8d1-4ee0-990b-24973ca23df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.471 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] [instance: 2b20c302-a8d1-4ee0-990b-24973ca23df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.472 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.494 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.495 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.495 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.495 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Auditing locally available compute resources for np0005471150.localdomain (node: np0005471150.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.496 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:17:33 localhost ovn_metadata_agent[163429]: 2025-10-05 10:17:33.738 163434 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3b30d637-702a-429f-9027-888244ff6474, 
col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:17:33 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:17:33 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2530604165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:17:33 localhost nova_compute[297021]: 2025-10-05 10:17:33.977 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.062 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.063 2 DEBUG nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.275 2 WARNING nova.virt.libvirt.driver [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.277 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Hypervisor/Node resource view: name=np0005471150.localdomain free_ram=11014MB free_disk=41.836944580078125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.277 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.277 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.575 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Instance 2b20c302-a8d1-4ee0-990b-24973ca23df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.575 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.576 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Final resource view: name=np0005471150.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:17:34 localhost nova_compute[297021]: 2025-10-05 10:17:34.832 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing inventories for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.010 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating ProviderTree inventory for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 06:17:35 localhost 
nova_compute[297021]: 2025-10-05 10:17:35.011 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Updating inventory in ProviderTree for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.032 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing aggregate associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.065 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Refreshing trait associations for resource provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c, traits: 
HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.112 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:17:35 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Oct 5 06:17:35 localhost ceph-mon[308154]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3762770152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.534 2 DEBUG oslo_concurrency.processutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.539 2 DEBUG nova.compute.provider_tree [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed in ProviderTree for provider: 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.552 2 DEBUG nova.scheduler.client.report [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Inventory has not changed for provider 8a0ca304-1dcc-49c8-bfba-4b3cf63aba6c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.554 2 DEBUG nova.compute.resource_tracker [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Compute_service record updated for np0005471150.localdomain:np0005471150.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:17:35 
localhost nova_compute[297021]: 2025-10-05 10:17:35.555 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.556 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.556 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 06:17:35 localhost nova_compute[297021]: 2025-10-05 10:17:35.568 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 06:17:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8. 
Oct 5 06:17:36 localhost podman[345153]: 2025-10-05 10:17:36.669487615 +0000 UTC m=+0.080770963 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:17:36 localhost podman[345153]: 2025-10-05 10:17:36.681871835 +0000 UTC m=+0.093155193 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:17:36 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully. 
Oct 5 06:17:37 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:37 localhost nova_compute[297021]: 2025-10-05 10:17:37.422 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:37 localhost nova_compute[297021]: 2025-10-05 10:17:37.440 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:37 localhost nova_compute[297021]: 2025-10-05 10:17:37.867 2 DEBUG oslo_service.periodic_task [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:37 localhost nova_compute[297021]: 2025-10-05 10:17:37.887 2 DEBUG nova.compute.manager [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Triggering sync for uuid 2b20c302-a8d1-4ee0-990b-24973ca23df1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Oct 5 06:17:37 localhost nova_compute[297021]: 2025-10-05 10:17:37.888 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Acquiring lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:17:37 localhost nova_compute[297021]: 2025-10-05 10:17:37.888 2 DEBUG oslo_concurrency.lockutils [None 
req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:17:37 localhost nova_compute[297021]: 2025-10-05 10:17:37.909 2 DEBUG oslo_concurrency.lockutils [None req-4a3ea716-bef9-4c02-816b-c702b9c48f0f - - - - - -] Lock "2b20c302-a8d1-4ee0-990b-24973ca23df1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:17:38 localhost nova_compute[297021]: 2025-10-05 10:17:38.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:38 localhost nova_compute[297021]: 2025-10-05 10:17:38.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:38 localhost nova_compute[297021]: 2025-10-05 10:17:38.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5067 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:17:38 localhost nova_compute[297021]: 2025-10-05 10:17:38.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:38 localhost nova_compute[297021]: 2025-10-05 10:17:38.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:38 localhost nova_compute[297021]: 2025-10-05 10:17:38.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:42 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:43 localhost nova_compute[297021]: 2025-10-05 10:17:43.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90. Oct 5 06:17:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320. Oct 5 06:17:43 localhost podman[345176]: 2025-10-05 10:17:43.673932692 +0000 UTC m=+0.082744995 container health_status 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:17:43 localhost podman[345176]: 2025-10-05 10:17:43.68997298 +0000 UTC m=+0.098785273 container exec_died 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:17:43 localhost systemd[1]: 90d7568835a286f5711eaa53ae841ea7ed088c6b23fd39d83bcc777070c21f90.service: Deactivated successfully. Oct 5 06:17:43 localhost podman[345177]: 2025-10-05 10:17:43.790624852 +0000 UTC m=+0.194778841 container health_status efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:17:43 localhost podman[345177]: 2025-10-05 10:17:43.806874005 +0000 UTC m=+0.211027984 container exec_died efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, 
org.label-schema.license=GPLv2) Oct 5 06:17:43 localhost systemd[1]: efced66dde57f903db278a8a1203223437a08f6642d99113dcceddb2a8b48320.service: Deactivated successfully. Oct 5 06:17:46 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:17:46 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:17:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:17:47 localhost ceph-mon[308154]: log_channel(audit) log [INF] : from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:17:47 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:47 localhost ceph-mon[308154]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:17:47 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:17:47 localhost ceph-mon[308154]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:17:48 localhost nova_compute[297021]: 2025-10-05 10:17:48.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:48 localhost nova_compute[297021]: 2025-10-05 10:17:48.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:48 localhost nova_compute[297021]: 2025-10-05 10:17:48.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5016 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:17:48 localhost nova_compute[297021]: 
2025-10-05 10:17:48.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:48 localhost nova_compute[297021]: 2025-10-05 10:17:48.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:48 localhost nova_compute[297021]: 2025-10-05 10:17:48.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa. Oct 5 06:17:48 localhost podman[345301]: 2025-10-05 10:17:48.666749937 +0000 UTC m=+0.076468249 container health_status ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:17:48 localhost podman[345301]: 2025-10-05 10:17:48.697514017 +0000 UTC m=+0.107232299 container exec_died ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:17:48 localhost systemd[1]: ccb8136af89b6c181c995df7856bdf596c599cd903c628d6c5b39bafff2817fa.service: Deactivated successfully. Oct 5 06:17:49 localhost sshd[345319]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:17:51 localhost podman[248506]: time="2025-10-05T10:17:51Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:17:51 localhost podman[248506]: @ - - [05/Oct/2025:10:17:51 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 145678 "" "Go-http-client/1.1" Oct 5 06:17:51 localhost podman[248506]: @ - - [05/Oct/2025:10:17:51 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19402 "" "Go-http-client/1.1" Oct 5 06:17:52 localhost openstack_network_exporter[250601]: ERROR 10:17:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:17:52 localhost openstack_network_exporter[250601]: ERROR 10:17:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:17:52 localhost openstack_network_exporter[250601]: ERROR 10:17:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:17:52 localhost openstack_network_exporter[250601]: ERROR 10:17:52 
appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:17:52 localhost openstack_network_exporter[250601]: Oct 5 06:17:52 localhost openstack_network_exporter[250601]: ERROR 10:17:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:17:52 localhost openstack_network_exporter[250601]: Oct 5 06:17:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222. Oct 5 06:17:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef. Oct 5 06:17:52 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:52 localhost podman[345321]: 2025-10-05 10:17:52.311688694 +0000 UTC m=+0.083349772 container health_status 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:17:52 localhost podman[345321]: 2025-10-05 10:17:52.361560503 +0000 UTC m=+0.133221581 container exec_died 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:17:52 localhost systemd[1]: 1e7b5b6dade3a8e46adb876d4e8468e3e27028982b8a50b920a2b1226aa41222.service: Deactivated successfully. 
Oct 5 06:17:52 localhost podman[345322]: 2025-10-05 10:17:52.366641248 +0000 UTC m=+0.134763652 container health_status dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:17:52 localhost podman[345322]: 2025-10-05 10:17:52.44777109 +0000 UTC m=+0.215893454 container exec_died 
dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:17:52 localhost systemd[1]: dc6f280364e3c0e53b3b66bf9130a5576bad05a2ffe329f3b26d48590bc3dcef.service: Deactivated successfully. 
Oct 5 06:17:53 localhost nova_compute[297021]: 2025-10-05 10:17:53.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:53 localhost nova_compute[297021]: 2025-10-05 10:17:53.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:53 localhost nova_compute[297021]: 2025-10-05 10:17:53.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:17:53 localhost nova_compute[297021]: 2025-10-05 10:17:53.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:53 localhost nova_compute[297021]: 2025-10-05 10:17:53.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:53 localhost nova_compute[297021]: 2025-10-05 10:17:53.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:57 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329. 
Oct 5 06:17:57 localhost podman[345364]: 2025-10-05 10:17:57.665739465 +0000 UTC m=+0.076758296 container health_status 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, 
io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm) Oct 5 06:17:57 localhost podman[345364]: 2025-10-05 10:17:57.683453096 +0000 UTC m=+0.094471927 container exec_died 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container) Oct 5 06:17:57 localhost systemd[1]: 2f7f7dc882e6c94c827ff53cd46aca9509e88f485d2b9738a10fbf7f421aa329.service: Deactivated successfully. Oct 5 06:17:57 localhost sshd[345385]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:17:58 localhost systemd-logind[760]: New session 80 of user zuul. Oct 5 06:17:58 localhost systemd[1]: Started Session 80 of User zuul. 
Oct 5 06:17:58 localhost nova_compute[297021]: 2025-10-05 10:17:58.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:58 localhost nova_compute[297021]: 2025-10-05 10:17:58.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:17:58 localhost nova_compute[297021]: 2025-10-05 10:17:58.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Oct 5 06:17:58 localhost nova_compute[297021]: 2025-10-05 10:17:58.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:58 localhost nova_compute[297021]: 2025-10-05 10:17:58.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:58 localhost nova_compute[297021]: 2025-10-05 10:17:58.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:17:58 localhost python3[345407]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163efc-24cc-abfa-20d5-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 06:18:00 localhost systemd[1]: session-80.scope: Deactivated successfully. Oct 5 06:18:00 localhost systemd-logind[760]: Session 80 logged out. Waiting for processes to exit. Oct 5 06:18:00 localhost systemd-logind[760]: Removed session 80. 
Oct 5 06:18:02 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:18:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.
Oct 5 06:18:02 localhost systemd[1]: tmp-crun.mh03Ez.mount: Deactivated successfully.
Oct 5 06:18:02 localhost podman[345411]: 2025-10-05 10:18:02.691799504 +0000 UTC m=+0.096944224 container health_status 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 5 06:18:02 localhost podman[345411]: 2025-10-05 10:18:02.702609782 +0000 UTC m=+0.107754492 container exec_died 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 5 06:18:02 localhost systemd[1]: 9c29918ca8c7c98d7732854c26d985efa735c6574399dd911830a4601209cbfd.service: Deactivated successfully.
Oct 5 06:18:03 localhost nova_compute[297021]: 2025-10-05 10:18:03.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:18:03 localhost nova_compute[297021]: 2025-10-05 10:18:03.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:18:03 localhost nova_compute[297021]: 2025-10-05 10:18:03.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 06:18:03 localhost nova_compute[297021]: 2025-10-05 10:18:03.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:18:03 localhost nova_compute[297021]: 2025-10-05 10:18:03.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:18:03 localhost nova_compute[297021]: 2025-10-05 10:18:03.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:18:07 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:18:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.
Oct 5 06:18:07 localhost podman[345434]: 2025-10-05 10:18:07.672746782 +0000 UTC m=+0.077456445 container health_status fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 06:18:07 localhost podman[345434]: 2025-10-05 10:18:07.683051887 +0000 UTC m=+0.087761530 container exec_died fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 06:18:07 localhost systemd[1]: fdb6a991d19b7b76bf6adff9be91f7a11eba3080fd6322dcbb3b2f77b9d36aa8.service: Deactivated successfully.
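The `config_data=` payload in the podman health_status/exec_died events above is a Python-literal dict written by edpm_ansible. When post-processing such logs it can be parsed safely with `ast.literal_eval`, which accepts only literals and never executes code. A small sketch on a trimmed payload (the full dict in the log also carries `image`, `command`, `environment`, and more):

```python
import ast

# Trimmed config_data payload as it appears in the podman container event.
raw = ("{'restart': 'always', 'net': 'host', 'ports': ['9100:9100'], "
       "'healthcheck': {'test': '/openstack/healthcheck node_exporter'}}")

cfg = ast.literal_eval(raw)  # safe: parses literals only, no code execution
print(cfg['ports'])                  # ['9100:9100']
print(cfg['healthcheck']['test'])    # /openstack/healthcheck node_exporter
```

Extracting the `config_data={...}` substring from a full event line still requires balancing the nested braces, since a naive regex stops at the first `}`.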
Oct 5 06:18:08 localhost nova_compute[297021]: 2025-10-05 10:18:08.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:18:08 localhost nova_compute[297021]: 2025-10-05 10:18:08.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:18:08 localhost nova_compute[297021]: 2025-10-05 10:18:08.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 06:18:08 localhost nova_compute[297021]: 2025-10-05 10:18:08.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:18:08 localhost nova_compute[297021]: 2025-10-05 10:18:08.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:18:08 localhost nova_compute[297021]: 2025-10-05 10:18:08.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:18:12 localhost ceph-mon[308154]: mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:18:13 localhost sshd[345457]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 06:18:13 localhost ceph-mon[308154]: log_channel(cluster) log [DBG] : mgrmap e47: np0005471152.kbhlus(active, since 19m), standbys: np0005471150.zwqxye, np0005471151.jecxod
Oct 5 06:18:13 localhost systemd-logind[760]: New session 81 of user zuul.
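The recurring ceph-mon `_set_new_cache_sizes` entries report byte counts, which is hard to eyeball at this magnitude. A small sketch of decoding one such line into MiB (the regex is an assumption about this exact message formatting):

```python
import re

line = ("mon.np0005471150@0(leader).osd e277 _set_new_cache_sizes "
        "cache_size:1020054731 inc_alloc: 343932928 "
        "full_alloc: 348127232 kv_alloc: 318767104")

# Collect every "name: bytes" pair from the message.
sizes = {k: int(v) for k, v in re.findall(r"(\w+):\s*(\d+)", line)}

for name, nbytes in sizes.items():
    print(f"{name}: {nbytes / 2**20:.1f} MiB")
# cache_size is ~972.8 MiB; the three alloc figures are exact MiB
# multiples (328, 332, and 304 MiB), consistent with allocator rounding.
```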
Oct 5 06:18:13 localhost nova_compute[297021]: 2025-10-05 10:18:13.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:18:13 localhost nova_compute[297021]: 2025-10-05 10:18:13.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 5 06:18:13 localhost nova_compute[297021]: 2025-10-05 10:18:13.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 5 06:18:13 localhost nova_compute[297021]: 2025-10-05 10:18:13.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:18:13 localhost nova_compute[297021]: 2025-10-05 10:18:13.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:18:13 localhost nova_compute[297021]: 2025-10-05 10:18:13.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 5 06:18:13 localhost systemd[1]: Started Session 81 of User zuul.